# OpenAI CDN

> Documentation for OpenAI CDN

## Pages

- [OpenAI CDN Documentation](openai-cdn-documentation.md)
- [Libraries](libraries.md): Set up your development environment to use the OpenAI API with an SDK in your
- [Text generation](text-generation.md): Learn how to prompt a model to generate text.
- [Upload a PDF we will reference in the variables](upload-a-pdf-we-will-reference-in-the-variables.md): file = client.files.create(
- [Assume you have already uploaded the PDF and obtained FILE_ID](assume-you-have-already-uploaded-the-pdf-and-obtained-file-id.md): curl -H "Authorization: Bearer $OPENAI_API_KEY" -H "Content-Type: application...
- [GPT Actions library](gpt-actions-library.md): Build and integrate GPT Actions for common applications.
- [GPT Action authentication](gpt-action-authentication.md): Learn authentication options for GPT Actions.
- [Data retrieval with GPT Actions](data-retrieval-with-gpt-actions.md): Retrieve data using APIs and databases with GPT Actions.
- [Getting started with GPT Actions](getting-started-with-gpt-actions.md): Set up and test GPT Actions from scratch.
- [GPT Actions](gpt-actions.md): Customize ChatGPT with GPT Actions and API integrations.
- [Production notes on GPT Actions](production-notes-on-gpt-actions.md): Deploy GPT Actions in production with best practices.
- [Sending and returning files with GPT Actions](sending-and-returning-files-with-gpt-actions.md): POST requests can include up to ten files (including DALL-E generated images)
- [Codex agent internet access](codex-agent-internet-access.md): Codex has full internet access
- [Bug with script](bug-with-script.md): Running the below script causes a 404 error:
- [Codex](codex.md): Delegate tasks to a software engineering agent in the cloud.
- [Install dependencies](install-dependencies.md): poetry install --with test
- [Contributor Guide](contributor-guide.md): - Use pnpm dlx turbo run where to jump to a package instead of
- [Deprecations](deprecations.md): Find deprecated features and recommended replacements.
- [Agents](agents.md): Learn how to build agents with the OpenAI API.
- [Audio and speech](audio-and-speech.md): Explore audio and speech features in the OpenAI API.
- [Fetch the audio file and convert it to a base64 encoded string](fetch-the-audio-file-and-convert-it-to-a-base64-encoded-string.md): url = "
- [Background mode](background-mode.md): Run long-running tasks asynchronously in the background.
- [Fire off an async response but also start streaming immediately](fire-off-an-async-response-but-also-start-streaming-immediately.md): stream = client.responses.create(
- [If your connection drops, the response continues running and you can reconnect:](if-your-connection-drops-the-response-continues-running-and-you-can-reconnect.md)
- [SDK support for resuming the stream is coming soon.](sdk-support-for-resuming-the-stream-is-coming-soon.md)
- [for event in client.responses.stream(resp.id, starting_after=cursor):](for-event-in-clientresponsesstreamrespid-starting-aftercursor.md)
- [print(event)](printevent.md): 1. Background sampling requires `store=true`; stateless requests are rejected.
- [Batch API](batch-api.md): Process jobs asynchronously with Batch API.
- [Conversation state](conversation-state.md): Learn how to manage conversation state during a model interaction.
- [Add the response to the conversation](add-the-response-to-the-conversation.md): history += [{"role": el.role, "content": el.content} for el in response.output]
- [Cost optimization](cost-optimization.md): Improve your efficiency and reduce costs.
- [Deep research](deep-research.md): Use deep research models for complex analysis and research tasks.
- [this includes a response from attacker-controlled page](this-includes-a-response-from-attacker-controlled-page.md): // The model, having seen the malicious instructions, might then make a tool call like:
- [This sends the private CRM data as a query parameter to the attacker's site (evilcorp.net), resulting in exfiltration of sensitive information.](this-sends-the-private-crm-data-as-a-query-parameter-to-the-attackers-site-evilc.md): The private CRM record can now be exfiltrated to the attacker's site via the
- [Direct preference optimization](direct-preference-optimization.md): Fine-tune models for subjective decision-making by comparing model outputs.
- [Vector embeddings](vector-embeddings.md): Learn how to turn text into numbers, unlocking use cases like search.
- [Create a t-SNE model and transform the data](create-a-t-sne-model-and-transform-the-data.md): tsne = TSNE(n_components=2, perplexity=15, random_state=42, init='random', learning_rate=200)
- [Evals design best practices](evals-design-best-practices.md): Learn best practices for designing evals to test model performance in production
- [Evaluating model performance](evaluating-model-performance.md): Test and improve model outputs through evaluations.
- [Fine-tuning best practices](fine-tuning-best-practices.md): Learn best practices to fine-tune OpenAI models and get better performance,
- [Flex processing](flex-processing.md): Beta
- [you can override the max timeout per request as well](you-can-override-the-max-timeout-per-request-as-well.md): response = client.with_options(timeout=900.0).responses.create(
- [Function calling](function-calling.md): Give models access to new functionality and data they can use to follow
- [1. Define a list of callable tools for the model](1-define-a-list-of-callable-tools-for-the-model.md): tools = [
- [Create a running input list we will add to over time](create-a-running-input-list-we-will-add-to-over-time.md): input_list = [
- [2. Prompt the model with tools defined](2-prompt-the-model-with-tools-defined.md): response = client.responses.create(
- [Save function call outputs for subsequent requests](save-function-call-outputs-for-subsequent-requests.md): function_call = None
- [3. Execute the function logic for get_horoscope](3-execute-the-function-logic-for-get-horoscope.md): result = {"horoscope": get_horoscope(function_call_arguments["sign"])}
- [4. Provide function call results to the model](4-provide-function-call-results-to-the-model.md): input_list.append({
- [5. The model should be able to give a response!](5-the-model-should-be-able-to-give-a-response.md): print("Final output:")
- [Graders](graders.md): Learn about graders used for evals and fine-tuning.
- [get the API key from environment](get-the-api-key-from-environment.md): api_key = os.environ["OPENAI_API_KEY"]
- [define a dummy grader for illustration purposes](define-a-dummy-grader-for-illustration-purposes.md): grader = {
- [validate the grader](validate-the-grader.md): payload = {"grader": grader}
- [run the grader with a test reference and sample](run-the-grader-with-a-test-reference-and-sample.md): payload = {
- [get the API key from environment](get-the-api-key-from-environment-2.md): api_key = os.environ["OPENAI_API_KEY"]
- [define a dummy grader for illustration purposes](define-a-dummy-grader-for-illustration-purposes-2.md): grader = {
- [validate the grader](validate-the-grader-2.md): payload = {"grader": grader}
- [run the grader with a test reference and sample](run-the-grader-with-a-test-reference-and-sample-2.md): payload = {
- [get the API key from environment](get-the-api-key-from-environment-3.md): api_key = os.environ["OPENAI_API_KEY"]
- [define a dummy grader for illustration purposes](define-a-dummy-grader-for-illustration-purposes-3.md): grader = {
- [validate the grader](validate-the-grader-3.md): payload = {"grader": grader}
- [run the grader with a test reference and sample](run-the-grader-with-a-test-reference-and-sample-3.md): payload = {
- [Image generation](image-generation.md): Learn how to generate or edit images.
- [Save the image to a file](save-the-image-to-a-file.md): image_data = [
- [Save the image to a file](save-the-image-to-a-file-2.md): with open("otter.png", "wb") as f:
- [Follow up](follow-up.md): response_fwup = client.responses.create(
- [Follow up](follow-up-2.md): response_fwup = openai.responses.create(
- [Save the image to a file](save-the-image-to-a-file-3.md): with open("gift-basket.png", "wb") as f:
- [Save the image to a file](save-the-image-to-a-file-4.md): with open("composition.png", "wb") as f:
- [1. Load your black & white mask as a grayscale image](1-load-your-black-white-mask-as-a-grayscale-image.md): mask = Image.open(img_path_mask).convert("L")
- [2. Convert it to RGBA so it has space for an alpha channel](2-convert-it-to-rgba-so-it-has-space-for-an-alpha-channel.md): mask_rgba = mask.convert("RGBA")
- [3. Then use the mask itself to fill that alpha channel](3-then-use-the-mask-itself-to-fill-that-alpha-channel.md): mask_rgba.putalpha(mask)
- [4. Convert the mask into bytes](4-convert-the-mask-into-bytes.md): buf = BytesIO()
- [5. Save the resulting file](5-save-the-resulting-file.md): img_path_mask_alpha = "mask_alpha.png"
- [Extract the edited image](extract-the-edited-image.md): image_data = [
- [Save the image to a file](save-the-image-to-a-file-5.md): with open("woman_with_logo.png", "wb") as f:
- [Save the image to a file](save-the-image-to-a-file-6.md): with open("sprite.png", "wb") as f:
- [Images and vision](images-and-vision.md): Learn how to understand or generate images.
- [Function to encode the image](function-to-encode-the-image.md): def encode_image(image_path):
- [Path to your image](path-to-your-image.md): image_path = "path_to_your_image.jpg"
- [Getting the Base64 string](getting-the-base64-string.md): base64_image = encode_image(image_path)
- [Function to create a file with the Files API](function-to-create-a-file-with-the-files-api.md): def create_file(file_path):
- [Getting the file ID](getting-the-file-id.md): file_id = create_file("path_to_your_image.jpg")
- [Latency optimization](latency-optimization.md): Improve latency across a wide variety of LLM-related use cases.
- [Using GPT-5](using-gpt-5.md): Learn best practices, features, and migration guidance for GPT-5.
- [Model optimization](model-optimization.md): Ensure quality model outputs with evals and fine-tuning in the OpenAI platform.
- [Moderation](moderation.md): Identify potentially harmful content in text and images.
- [Optimizing LLM Accuracy](optimizing-llm-accuracy.md): Maximize correctness and consistent behavior when working with LLMs.
- [File inputs](file-inputs.md): Learn how to use PDF files as inputs to the OpenAI API.
- [Predicted Outputs](predicted-outputs.md): Reduce latency for model responses where much of the response is known ahead of
- [Priority processing](priority-processing.md): Get faster processing in the API with flexible pricing.
- [Production best practices](production-best-practices.md): Transition AI projects to production with best practices.
- [Prompt caching](prompt-caching.md): Reduce latency and cost with prompt caching.
- [Prompt engineering](prompt-engineering.md): Enhance results with prompt engineering strategies.
- [Upload a PDF we will reference in the variables](upload-a-pdf-we-will-reference-in-the-variables-2.md): file = client.files.create(
- [Assume you have already uploaded the PDF and obtained FILE_ID](assume-you-have-already-uploaded-the-pdf-and-obtained-file-id-2.md): curl -H "Authorization: Bearer $OPENAI_API_KEY" -H "Content-Type: application...
- [Identity](identity.md): You are a coding assistant that helps enforce the use of snake case
- [Instructions](instructions.md): * When defining variables, use snake case names (e.g. my_variable)
- [Examples](examples.md): How do I declare a string variable for a first name?
- [Identity](identity-2.md): You are a helpful assistant that labels short product reviews as
- [Instructions](instructions-2.md): * Only output a single word in your response with no additional formatting
- [Examples](examples-2.md): I absolutely love these headphones — sound quality is amazing!
- [Prompting](prompting.md): Learn how to create prompts.
- [Rate limits](rate-limits.md): Understand API rate limits and restrictions.
- [imports](imports.md): import random
- [define a retry decorator](define-a-retry-decorator.md): def retry_with_exponential_backoff(
- [Realtime conversations](realtime-conversations.md): Beta
- [... create websocket-client named ws ...](create-websocket-client-named-ws.md): def float_to_16bit_pcm(float32_array):
- [Realtime transcription](realtime-transcription.md): Beta
- [Voice activity detection (VAD)](voice-activity-detection-vad.md): Beta
- [Realtime API](realtime-api.md): Beta
- [pip install websocket-client](pip-install-websocket-client.md): import os
- [Reasoning best practices](reasoning-best-practices.md): Learn when to use reasoning models and how they compare to GPT models.
- [Reasoning models](reasoning-models.md): Explore advanced reasoning and problem-solving models.
- [Reinforcement fine-tuning](reinforcement-fine-tuning.md): Fine-tune models for expert-level performance within a domain.
- [Overview](overview.md): Evaluate the accuracy of the model-generated answer based on the Copernicus
- [Note: Do not use MyCustomClass.model_json_schema() in place of](note-do-not-use-mycustomclassmodel-json-schema-in-place-of.md)
- [to_strict_json_schema as it is not equivalent](to-strict-json-schema-as-it-is-not-equivalent.md): response_format = dict(
- [Retrieval](retrieval.md): Search your data using semantic similarity.
- [Reinforcement fine-tuning use cases](reinforcement-fine-tuning-use-cases.md): Learn use cases and best practices for reinforcement fine-tuning.
- [Note this file gets uploaded to the OpenAI API as a grader](note-this-file-gets-uploaded-to-the-openai-api-as-a-grader.md): from ast_grep_py import SgRoot
- [Similarity ratio helper](similarity-ratio-helper.md): def fuzz_ratio(a: str, b: str) -> float:
- [Main grading entrypoint (must be named `grade`)](main-grading-entrypoint-must-be-named-grade.md): def grade(sample: dict, item: dict) -> float:
- [Safety best practices](safety-best-practices.md): Implement safety measures like moderation and human oversight.
- [Safety checks](safety-checks.md): Learn how OpenAI assesses for safety and how to pass safety checks.
- [Speech to text](speech-to-text.md): Learn how to turn audio into text.
- [PyDub handles time in milliseconds](pydub-handles-time-in-milliseconds.md): ten_minutes = 10 * 60 * 1000
- [Streaming API responses](streaming-api-responses.md): Learn how to stream model responses from the OpenAI API using server-sent
- [Structured model outputs](structured-model-outputs.md): Ensure text responses from the model adhere to a JSON schema you define.
- [If the model refuses to respond, you will get a refusal message](if-the-model-refuses-to-respond-you-will-get-a-refusal-message.md): if (math_reasoning.refusal):
- [Supervised fine-tuning](supervised-fine-tuning.md): Fine-tune models with example inputs and known good outputs for better results
- [Text to speech](text-to-speech.md): Learn how to turn text into lifelike spoken audio.
- [Code Interpreter](code-interpreter.md): Allow models to write and run Python to solve problems.
- [Use the returned container id in the next call:](use-the-returned-container-id-in-the-next-call.md): curl \
- [Computer use](computer-use.md): Build a computer-using agent that can perform tasks on your behalf.
- [1) Install Xfce, x11vnc, Xvfb, xdotool, etc., but remove any screen lockers or power managers](1-install-xfce-x11vnc-xvfb-xdotool-etc-but-remove-any-screen-lockers-or-power-ma.md): RUN apt-get update && apt-get install -y xfce4 xfce4-goodies x11vnc xvfb xdotool imagemagick ...
- [2) Add the mozillateam PPA and install Firefox ESR](2-add-the-mozillateam-ppa-and-install-firefox-esr.md): RUN add-apt-repository ppa:mozillateam/ppa && apt-get update && apt-get install -y --no-install-recommends firefox-...
- [3) Create non-root user](3-create-non-root-user.md): RUN useradd -ms /bin/bash myuser && echo "myuser ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
- [4) Set x11vnc password ("secret")](4-set-x11vnc-password-secret.md): RUN x11vnc -storepasswd secret /home/myuser/.vncpass
- [5) Expose port 5900 and run Xvfb, x11vnc, Xfce (no login manager)](5-expose-port-5900-and-run-xvfb-x11vnc-xfce-no-login-manager.md): EXPOSE 5900
- [Connectors and MCP servers](connectors-and-mcp-servers.md): Beta
- [File search](file-search.md): Allow models to search your files for relevant information before generating a
- [Replace with your own file path or URL](replace-with-your-own-file-path-or-url.md): file_id = create_file(client, "
- [Image generation](image-generation-2.md): Allow models to generate or edit images.
- [Save the image to a file](save-the-image-to-a-file-7.md): image_data = [
- [Follow up](follow-up-3.md): response_fwup = client.responses.create(
- [Follow up](follow-up-4.md): response_fwup = openai.responses.create(
- [Local shell](local-shell.md): Enable agents to run commands in a local shell.
- [1) Create the initial response request with the tool enabled](1-create-the-initial-response-request-with-the-tool-enabled.md): response = client.responses.create(
- [Print the assistant's final answer](print-the-assistants-final-answer.md): final_message = next(
- [Web search](web-search.md): Allow models to search the web for the latest information before generating a
- [Using tools](using-tools.md): Use tools like remote MCP servers or web search to extend the model's
- [Vision fine-tuning](vision-fine-tuning.md): Fine-tune models for better image understanding.
- [Voice agents](voice-agents.md): Learn how to build voice agents that can understand audio and respond back in
- [Personality and Tone](personality-and-tone.md): // Who or what the AI represents (e.g., friendly teacher, formal advisor, helpful assistant). Be detailed and include...
- [Instructions](instructions-3.md): - If a user provides a name or phone number, or something else where you need to know the exact spelling, always repe...
- [Conversation States](conversation-states.md): [
- [Webhooks](webhooks.md): Use webhooks to receive real-time updates from the OpenAI API.
- [will raise if the signature is invalid](will-raise-if-the-signature-is-invalid.md): event = client.webhooks.unwrap(request.data, request.headers, secret=webhook_secret)
- [Data controls in the OpenAI platform](data-controls-in-the-openai-platform.md): Understand how OpenAI uses your data, and how you can control it.
- [Building MCP servers for ChatGPT and API integrations](building-mcp-servers-for-chatgpt-and-api-integrations.md): Build an MCP server to use with ChatGPT connectors, deep research, or API
- [Configure logging](configure-logging.md): logging.basicConfig(level=logging.INFO)
- [OpenAI configuration](openai-configuration.md): OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
- [Initialize OpenAI client](initialize-openai-client.md): openai_client = OpenAI()
- [this includes a response from attacker-controlled page](this-includes-a-response-from-attacker-controlled-page-2.md): // The model, having seen the malicious instructions, might then make a tool call like:
- [This sends the private CRM data as a query parameter to the attacker's site (evilcorp.net), resulting in exfiltration of sensitive information.](this-sends-the-private-crm-data-as-a-query-parameter-to-the-attackers-site-evilc-2.md): The private CRM record can now be exfiltrated to the attacker's site via the
- [babbage-002](babbage-002.md): **Current Snapshot:** babbage-002
- [ChatGPT-4o](chatgpt-4o.md): **Current Snapshot:** chatgpt-4o-latest
- [codex-mini-latest](codex-mini-latest.md): **Current Snapshot:** codex-mini-latest
- [computer-use-preview](computer-use-preview.md): **Current Snapshot:** computer-use-preview-2025-03-11
- [DALL·E 2](dallâe-2.md): **Current Snapshot:** dall-e-2
- [DALL·E 3](dallâe-3.md): **Current Snapshot:** dall-e-3
- [davinci-002](davinci-002.md): **Current Snapshot:** davinci-002
- [gpt-3.5-turbo-16k-0613](gpt-35-turbo-16k-0613.md): **Current Snapshot:** gpt-3.5-turbo-16k-0613
- [gpt-3.5-turbo-instruct](gpt-35-turbo-instruct.md): **Current Snapshot:** gpt-3.5-turbo-instruct
- [GPT-3.5 Turbo](gpt-35-turbo.md): **Current Snapshot:** gpt-3.5-turbo-0125
- [GPT-4.5 Preview (Deprecated)](gpt-45-preview-deprecated.md): **Current Snapshot:** gpt-4.5-preview-2025-02-27
- [GPT-4 Turbo Preview](gpt-4-turbo-preview.md): **Current Snapshot:** gpt-4-0125-preview
- [GPT-4 Turbo](gpt-4-turbo.md): **Current Snapshot:** gpt-4-turbo-2024-04-09
- [GPT-4.1 mini](gpt-41-mini.md): **Current Snapshot:** gpt-4.1-mini-2025-04-14
- [GPT-4.1 nano](gpt-41-nano.md): **Current Snapshot:** gpt-4.1-nano-2025-04-14
- [GPT-4.1](gpt-41.md): **Current Snapshot:** gpt-4.1-2025-04-14
- [GPT-4](gpt-4.md): **Current Snapshot:** gpt-4-0613
- [GPT-4o Audio](gpt-4o-audio.md): **Current Snapshot:** gpt-4o-audio-preview-2025-06-03
- [GPT-4o mini Audio](gpt-4o-mini-audio.md): **Current Snapshot:** gpt-4o-mini-audio-preview-2024-12-17
- [GPT-4o mini Realtime](gpt-4o-mini-realtime.md): **Current Snapshot:** gpt-4o-mini-realtime-preview-2024-12-17
- [GPT-4o mini Search Preview](gpt-4o-mini-search-preview.md): **Current Snapshot:** gpt-4o-mini-search-preview-2025-03-11
- [GPT-4o mini Transcribe](gpt-4o-mini-transcribe.md): **Current Snapshot:** gpt-4o-mini-transcribe
- [GPT-4o mini TTS](gpt-4o-mini-tts.md): **Current Snapshot:** gpt-4o-mini-tts
- [GPT-4o mini](gpt-4o-mini.md): **Current Snapshot:** gpt-4o-mini-2024-07-18
- [GPT-4o Realtime](gpt-4o-realtime.md): **Current Snapshot:** gpt-4o-realtime-preview-2025-06-03
- [GPT-4o Search Preview](gpt-4o-search-preview.md): **Current Snapshot:** gpt-4o-search-preview-2025-03-11
- [GPT-4o Transcribe](gpt-4o-transcribe.md): **Current Snapshot:** gpt-4o-transcribe
- [GPT-4o](gpt-4o.md): **Current Snapshot:** gpt-4o-2024-08-06
- [GPT-5 Chat](gpt-5-chat.md): **Current Snapshot:** gpt-5-chat-latest
- [GPT-5 mini](gpt-5-mini.md): **Current Snapshot:** gpt-5-mini-2025-08-07
- [GPT-5 nano](gpt-5-nano.md): **Current Snapshot:** gpt-5-nano-2025-08-07
- [GPT-5](gpt-5.md): **Current Snapshot:** gpt-5-2025-08-07
- [GPT Image 1](gpt-image-1.md): **Current Snapshot:** gpt-image-1
- [gpt-oss-120b](gpt-oss-120b.md): **Current Snapshot:** gpt-oss-120b
- [gpt-oss-20b](gpt-oss-20b.md): **Current Snapshot:** gpt-oss-20b
- [o1-mini](o1-mini.md): **Current Snapshot:** o1-mini-2024-09-12
- [o1 Preview](o1-preview.md): **Current Snapshot:** o1-preview-2024-09-12
- [o1-pro](o1-pro.md): **Current Snapshot:** o1-pro-2025-03-19
- [o1](o1.md): **Current Snapshot:** o1-2024-12-17
- [o3-deep-research](o3-deep-research.md): **Current Snapshot:** o3-deep-research-2025-06-26
- [o3-mini](o3-mini.md): **Current Snapshot:** o3-mini-2025-01-31
- [o3-pro](o3-pro.md): **Current Snapshot:** o3-pro-2025-06-10
- [o3](o3.md): **Current Snapshot:** o3-2025-04-16
- [o4-mini-deep-research](o4-mini-deep-research.md): **Current Snapshot:** o4-mini-deep-research-2025-06-26
- [o4-mini](o4-mini.md): **Current Snapshot:** o4-mini-2025-04-16
- [omni-moderation](omni-moderation.md): **Current Snapshot:** omni-moderation-2024-09-26
- [text-embedding-3-large](text-embedding-3-large.md): **Current Snapshot:** text-embedding-3-large
- [text-embedding-3-small](text-embedding-3-small.md): **Current Snapshot:** text-embedding-3-small
- [text-embedding-ada-002](text-embedding-ada-002.md): **Current Snapshot:** text-embedding-ada-002
- [text-moderation](text-moderation.md): **Current Snapshot:** text-moderation-007
- [text-moderation-stable](text-moderation-stable.md): **Current Snapshot:** text-moderation-007
- [TTS-1 HD](tts-1-hd.md): **Current Snapshot:** tts-1-hd
- [TTS-1](tts-1.md): **Current Snapshot:** tts-1
- [Whisper](whisper.md): **Current Snapshot:** whisper-1
- [Latest models](latest-models.md): **New:** Save on synchronous requests with
- [Fine-tuning](fine-tuning.md): Tokens used for model grading in reinforcement fine-tuning are billed at that
- [Built-in tools](built-in-tools.md): The tokens used for built-in tools are billed at the chosen model's per-token
- [Transcription and speech generation](transcription-and-speech-generation.md): | Name | Input | Output | Estimated cost | Unit |
- [Image generation](image-generation-3.md): Please note that this pricing for GPT Image 1 does not include text and image
- [Embeddings](embeddings.md): | Name | Cost | Unit |
- [Moderation](moderation-2.md): | Name | Cost | Unit |
- [Other models](other-models.md): | Name | Input | Output | Unit |
- [info](info.md): OpenAI API
- [tags](tags.md): Assistants
- [paths](paths.md): listAssistants
- [webhooks](webhooks-2.md): The event payload sent by the API.
- [components](components.md): object
- [x-oaiMeta](x-oaimeta.md): responses