# GroqCloud

> Fast LLM inference, OpenAI-compatible. Simple to integrate, easy to scale. Start building in minutes.

## Pages

- [Jigsawstack](docs-jigsawstack.md): [JigsawStack](https://jigsawstack.com/) is a powerful AI SDK designed to integrate into any backend, automating tasks...
- [Code Examples](scripts-code-examples.md): export const getExampleCode = (
- [Types.D](scripts-typesd.md): declare module "*.sh" {
- [Rate Limits](docs-rate-limits.md): Rate limits act as control measures to regulate how frequently users and applications can access our API within speci...
- [Quickstart](scripts-quickstart.md): import { Groq } from "groq-sdk";
- [Print the final content](scripts-quickstartpy.md): import json
- [Wolfram‑Alpha Integration](docs-wolfram-alpha.md): Some models and systems on Groq have native support for Wolfram‑Alpha integration, allowing them to access Wolfram's ...
- [Model Permissions](docs-model-permissions.md): Limit which models can be used at the organization and project level. When a request attempts to use a restricted mod...
- [Overview Refresh](docs-overview-refresh.md): No content to display.
- [Huggingface Basic.Py](scripts-huggingface-basicpy.md): import openai
- [Stripe](scripts-stripe.md): import OpenAI from "openai";
- [Chat Completions.Py](scripts-chat-completionspy.md): import os
- [Web Search Mcp](scripts-web-search-mcp.md): import OpenAI from "openai";
- [Chat Completions](scripts-chat-completions.md): import Groq from "groq-sdk";
- [Firecrawl Mcp.Py](scripts-firecrawl-mcppy.md): import openai
- [Stripe.Py](scripts-stripepy.md): import openai
- [Firecrawl Mcp](scripts-firecrawl-mcp.md): import OpenAI from "openai";
- [Web Search Mcp.Py](scripts-web-search-mcppy.md): import openai
- [Huggingface Basic](scripts-huggingface-basic.md): import OpenAI from "openai";
- [Model Context Protocol (MCP)](docs-mcp.md): [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) is an open-source standard that enables AI applicati...
- [Quickstart](scripts-quickstart-2.md): import { Groq } from "groq-sdk";
- [Print the final content](scripts-quickstartpy-2.md): import json
- [Visit Website](docs-visit-website.md): Some models and systems on Groq have native support for visiting and analyzing specific websites, allowing them to ac...
- [Flutterflow](docs-flutterflow.md): [**FlutterFlow**](https://flutterflow.io/) is a visual development platform to build high-quality, custom, cross-plat...
- [Billing FAQs](docs-billing-faqs.md): When you upgrade, **there's no immediate charge** - you'll be billed for tokens at month-end or when you reach progre...
- [Get Models](scripts-get-models.md): import Groq from "groq-sdk";
- [Get Models.Py](scripts-get-modelspy.md): import requests
- [Supported Models](docs-models.md): Explore all available models on GroqCloud.
- [Featured Cards](models-featured-cards.md): The following are some featured cards showcasing various AI systems.
- [Models](models-models.md): The following table lists available models, their speeds, and pricing.
- [Projects](docs-projects.md): Projects provide organizations with a powerful framework for managing multiple applications, environments, and teams ...
- [Qwen3 32B](model-qwen3-32b.md): No content to display.
- [Groq Hosted Models: DeepSeek-R1-Distill-Qwen-32B](model-deepseek-r1-distill-qwen-32b.md): DeepSeek-R1-Distill-Qwen-32B is a distilled version of DeepSeek's R1 model, fine-tuned from the Qwen-2.5-32B base mod...
- [Llama Prompt Guard 2 86M](model-llama-prompt-guard-2-86m.md): No content to display.
- [Llama Prompt Guard 2 86M](meta-llama-llama-prompt-guard-2-86m.md): * **Model Architecture**: Built upon Microsoft's mDeBERTa-base architecture, this 86M parameter model is specifically...
- [Llama Prompt Guard 2 22M](meta-llama-llama-prompt-guard-2-22m.md): * Model Architecture: Built upon Microsoft's DeBERTa-xsmall architecture, this 22M parameter model is specifically fi...
- [Llama 4 Scout 17B 16E Instruct](meta-llama-llama-4-scout-17b-16e-instruct.md): meta-llama/llama-4-scout-17b-16e-instruct, or Llama 4 Scout, is Meta's 17 billion parameter mixture-of-experts model ...
- [Llama 4 Maverick 17B 128E Instruct](meta-llama-llama-4-maverick-17b-128e-instruct.md): meta-llama/llama-4-maverick-17b-128e-instruct, or Llama 4 Maverick, is Meta's 17 billion parameter mixture-of-experts...
- [Llama Guard 4 12B](meta-llama-llama-guard-4-12b.md): * Model Architecture: Built upon Meta's Llama 4 Scout architecture, the model is comprised of 12 billion parameters a...
- [Qwen 3 32B](qwen-qwen3-32b.md): Qwen 3 32B is the latest generation of large language models in the Qwen series, offering groundbreaking advancements...
- [Whisper Large V3](model-whisper-large-v3.md): - **Model Architecture**: Built on OpenAI's transformer-based encoder-decoder architecture with 1550M parameters. The...
- [Whisper Large V3 Turbo](model-whisper-large-v3-turbo.md): - **Model Size**: Optimized architecture for speed
- [Llama 3.3 70B Versatile](model-llama-33-70b-versatile.md): Llama-3.3-70B-Versatile is Meta's advanced multilingual large language model, optimized for a wide range of natural l...
- [Llama3 70B 8192](model-llama3-70b-8192.md): Llama 3.0 70B on Groq offers a balance of performance and speed as a reliable foundation model that excels at dialogu...
- [Distil Whisper Large V3 En](model-distil-whisper-large-v3-en.md): - **Model Architecture**: Built on the encoder-decoder transformer architecture inherited from Whisper, with optimize...
- [Llama3 8B 8192](model-llama3-8b-8192.md): Llama-3-8B-8192 delivers exceptional performance with industry-leading speed and cost-efficiency on Groq hardware. Th...
- [Gpt Oss 20B](openai-gpt-oss-20b.md): * Model Architecture
- [Gpt Oss 120B](openai-gpt-oss-120b.md): * Model Architecture
- [Prompt Injection](scripts-prompt-injection.md): import Groq from "groq-sdk";
- [Prompt Injection.Py](scripts-prompt-injectionpy.md): from groq import Groq
- [Gpt Oss Safeguard 20B](openai-gpt-oss-safeguard-20b.md): Built on the GPT-OSS architecture with 20B total parameters. Fine-tuned specifically for safety classification tasks ...
- [Mistral Saba 24B](model-mistral-saba-24b.md): Mistral Saba 24B is a specialized model trained to excel in Arabic, Farsi, Urdu, Hebrew, and Indic languages. With a ...
- [Llama Prompt Guard 2 22M](model-llama-prompt-guard-2-22m.md): No content to display.
- [Llama 4 Scout 17B 16E Instruct](model-llama-4-scout-17b-16e-instruct.md): No content to display.
- [Llama 3.3 70B Specdec](model-llama-33-70b-specdec.md): Llama-3.3-70B-SpecDec is Groq's speculative decoding version of Meta's Llama 3.3 70B model, optimized for high-speed ...
- [Llama 4 Maverick 17B 128E Instruct](model-llama-4-maverick-17b-128e-instruct.md): No content to display.
- [Allam 2 7B](model-allam-2-7b.md): * Model Architecture
- [Deepseek R1 Distill Llama 70B](model-deepseek-r1-distill-llama-70b.md): DeepSeek-R1-Distill-Llama-70B is a distilled version of DeepSeek's R1 model, fine-tuned from the Llama-3.3-70B-Instru...
- [Qwen 2.5 Coder 32B](model-qwen-25-coder-32b.md): Qwen-2.5-Coder-32B is a specialized version of Qwen-2.5-32B, fine-tuned specifically for code generation and developm...
- [Llama 3.2 1B Preview](model-llama-32-1b-preview.md): LLaMA-3.2-1B-Preview is one of the fastest models on Groq, making it perfect for cost-sensitive, high-throughput appl...
- [Playai Tts Arabic](model-playai-tts-arabic.md): The model was trained on millions of audio samples with diverse characteristics:
- [Llama 3.2 3B Preview](model-llama-32-3b-preview.md): LLaMA-3.2-3B-Preview is one of the fastest models on Groq, offering a great balance of speed and generation quality. ...
- [Qwen Qwq 32B](model-qwen-qwq-32b.md): Qwen/Qwq-32B is a 32-billion parameter reasoning model delivering competitive performance against state-of-the-art mo...
- [Gemma2 9B It](model-gemma2-9b-it.md): * Model Architecture
- [Llama Guard 4 12B](model-llama-guard-4-12b.md): No content to display.
- [Llama Guard 3 8B](model-llama-guard-3-8b.md): Llama-Guard-3-8B, a specialized content moderation model built on the Llama framework, excels at identifying and filt...
- [Kimi K2 Instruct 0905](moonshotai-kimi-k2-instruct-0905.md): Built on a Mixture-of-Experts (MoE) architecture with 1 trillion total parameters and 32 billion activated parameters...
- [Kimi K2 Instruct](moonshotai-kimi-k2-instruct.md): This model currently redirects to the latest [0905 version](/docs/model/moonshotai/kimi-k2-instruct-0905), which offe...
- [Qwen-2.5-32B](model-qwen-25-32b.md): Qwen-2.5-32B is Alibaba's flagship model, delivering near-instant responses with GPT-4 level capabilities across a wi...
- [Playai Tts](model-playai-tts.md): PlayAI Dialog v1.0 is based on a transformer architecture optimized for high-quality speech output. The model support...
- [Llama 3.1 8B Instant](model-llama-31-8b-instant.md): llama-3.1-8b-instant on Groq offers rapid response times with production-grade reliability, suitable for latency-sens...
- [Compound Beta](agentic-tooling-compound-beta.md): No content to display.
- [Agentic Tooling](docs-agentic-tooling.md): No content to display.
- [Compound Beta Mini](agentic-tooling-compound-beta-mini.md): No content to display.
- [Compound](groq-compound.md): No content to display.
- [Compound Mini](groq-compound-mini.md): No content to display.
- [Ai Sdk](docs-ai-sdk.md): Vercel's AI SDK enables seamless integration with Groq, providing developers with powerful tools to leverage language...
- [Parallel](docs-parallel.md): [Parallel](https://parallel.ai) provides a web search MCP server that gives AI models access to real-time web data. C...
- [Tavily](docs-tavily.md): [Tavily](https://tavily.com) is a comprehensive web search, scraping, and crawling API designed specifically for AI a...
- [Openai Compat.Py](scripts-openai-compatpy.md): import os
- [Openai Compat](scripts-openai-compat.md): import OpenAI from "openai";
- [Configure](docs-autogen.md): [AutoGen](https://microsoft.github.io/autogen/) developed by [Microsoft Research](https://www.microsoft.com/research/...
- [Llamaguard Chat Completion.Json](scripts-llamaguard-chat-completionjson.md): {
- [Llamaguard Chat Completion.Py](scripts-llamaguard-chat-completionpy.md): import os
- [Llamaguard Chat Completion](scripts-llamaguard-chat-completion.md): import Groq from "groq-sdk";
- [Content Moderation](docs-content-moderation.md): User prompts can sometimes include harmful, inappropriate, or policy-violating content that can be used to exploit mo...
- [Quickstart](scripts-quickstart-3.md): import { Groq } from "groq-sdk";
- [Print the final content](scripts-quickstartpy-3.md): import json
- [Browser Automation](docs-browser-automation.md): Some models and systems on Groq have native support for advanced browser automation, allowing them to launch and cont...
- [Understanding and Optimizing Latency on Groq](production-readiness-optimizing-latency.md): Latency is a critical factor when building production applications with Large Language Models (LLMs). This guide help...
- [Security Onboarding](production-readiness-security-onboarding.md): Welcome to the **Groq Security Onboarding** guide.
- [Production-Ready Checklist for Applications on GroqCloud](production-readiness-production-ready-checklist.md): Deploying LLM applications to production involves critical decisions that directly impact user experience, operationa...
- [Performing Chat Completion.Json](scripts-performing-chat-completionjson.md): {
- [Quickstart Ai Sdk](scripts-quickstart-ai-sdk.md): import Groq from "groq-sdk";
- [Performing Chat Completion.Py](scripts-performing-chat-completionpy.md): import os
- [Performing Chat Completion](scripts-performing-chat-completion.md): import Groq from "groq-sdk";
- [Quickstart](docs-quickstart.md): Get up and running with the Groq API in a few minutes, with the steps below.
- [Email Classification.Py](scripts-email-classificationpy.md): from groq import Groq
- [Sql Query Generation](scripts-sql-query-generation.md): import Groq from "groq-sdk";
- [File System Schema.Json](scripts-file-system-schemajson.md): {
- [Appointment Booking Schema.Json](scripts-appointment-booking-schemajson.md): {
- [Task Creation Schema.Json](scripts-task-creation-schemajson.md): {
- [Support Ticket Zod.Doc](scripts-support-ticket-zoddoc.md): import Groq from "groq-sdk";
- [Email Classification Response.Json](scripts-email-classification-responsejson.md): {
- [Step2 Example.Py](scripts-step2-examplepy.md): from groq import Groq
- [Api Response Validation.Py](scripts-api-response-validationpy.md): from groq import Groq
- [Api Response Validation](scripts-api-response-validation.md): import Groq from "groq-sdk";
- [Api Response Validation Response.Json](scripts-api-response-validation-responsejson.md): {
- [Support Ticket Pydantic.Py](scripts-support-ticket-pydanticpy.md): from groq import Groq
- [Sql Query Generation.Py](scripts-sql-query-generationpy.md): from groq import Groq
- [Project Milestones Schema.Json](scripts-project-milestones-schemajson.md): {
- [Json Object Mode](scripts-json-object-mode.md): import { Groq } from "groq-sdk";
- [Product Review](scripts-product-review.md): import Groq from "groq-sdk";
- [Json Object Mode.Py](scripts-json-object-modepy.md): from groq import Groq
- [Email Classification](scripts-email-classification.md): import Groq from "groq-sdk";
- [Product Review.Py](scripts-product-reviewpy.md): from groq import Groq
- [Payment Method Schema.Json](scripts-payment-method-schemajson.md): {
- [Step2 Example](scripts-step2-example.md): import Groq from "groq-sdk";
- [Organization Chart Schema.Json](scripts-organization-chart-schemajson.md): {
- [Structured Outputs](docs-structured-outputs.md): Guarantee model responses strictly conform to your JSON schema for reliable, type-safe data structures.
- [Translation](scripts-translation.md): import fs from "fs";
- [Initialize the Groq client](scripts-transcriptionpy.md): import os
- [Transcription](scripts-transcription.md): import fs from "fs";
- [Initialize the Groq client](scripts-translationpy.md): import os
- [Speech to Text](docs-speech-to-text.md): Groq API is designed to provide the fastest speech-to-text solution available, offering OpenAI-compatible endpoints that...
- [Initialize the agent with an LLM via Groq and DuckDuckGoTools](docs-agno.md): [Agno](https://github.com/agno-agi/agno) is a lightweight framework for building multi-modal Agents. It's easy to use...
- [Litellm](docs-litellm.md): LiteLLM provides a simple framework with features to help productionize your application infrastructure, including:
- [English.Py](scripts-englishpy.md): import os
- [English](scripts-english.md): import fs from "fs";
- [Text to Speech](docs-text-to-speech.md): Learn how to instantly generate lifelike audio from text.
- [Prometheus Metrics](docs-prometheus-metrics.md): [Prometheus](https://prometheus.io/) is an open-source monitoring system that collects and stores metrics as time ser...
- [Create Batch Job](scripts-create-batch-job.md): import Groq from 'groq-sdk';
- [Initial request - gets first page of batches](scripts-list-batchespy.md): import os
- [Retrieve](scripts-retrieve.md): import fs from 'fs';
- [Retrieve.Py](scripts-retrievepy.md): import os
- [List Batches](scripts-list-batches.md): import Groq from 'groq-sdk';
- [Create Batch Job.Py](scripts-create-batch-jobpy.md): import os
- [Status.Py](scripts-statuspy.md): import os
- [Upload File](scripts-upload-file.md): import fs from 'fs';
- [Status](scripts-status.md): import Groq from 'groq-sdk';
- [Multi Batch Status](scripts-multi-batch-status.md): async function main() {
- [Set up headers](scripts-multi-batch-statuspy.md): import os
- [Upload File.Py](scripts-upload-filepy.md): import os
- [Groq Batch API](docs-batch.md): Process large-scale workloads asynchronously with our Batch API.
- [Legacy Changelog](docs-legacy-changelog.md): Welcome to the Groq Changelog, where you can follow ongoing developments to our API.
- [Configure environment variables for Phoenix](docs-arize.md): [Arize Phoenix](https://docs.arize.com/phoenix) developed by [Arize AI](https://arize.com/) is an open-source AI obse...
- [Structured Outputs.Py](scripts-structured-outputspy.md): import os
- [Structured Outputs](scripts-structured-outputs.md): import OpenAI from "openai";
- [Structured Outputs Zod](scripts-structured-outputs-zod.md): import OpenAI from "openai";
- [Code Interpreter](scripts-code-interpreter.md): import OpenAI from "openai";
- [Code Interpreter.Py](scripts-code-interpreterpy.md): import openai
- [Structured Outputs Pydantic.Py](scripts-structured-outputs-pydanticpy.md): import os
- [Reasoning.Py](scripts-reasoningpy.md): import openai
- [Multi Turn.Py](scripts-multi-turnpy.md): import os
- [Reasoning](scripts-reasoning.md): import OpenAI from "openai";
- [Quickstart](scripts-quickstart-4.md): import OpenAI from "openai";
- [Multi Turn](scripts-multi-turn.md): import OpenAI from "openai";
- [Images.Py](scripts-imagespy.md): import os
- [Web Search](scripts-web-search.md): import OpenAI from "openai";
- [Quickstart.Py](scripts-quickstartpy-4.md): import openai
- [Web Search.Py](scripts-web-searchpy.md): import openai
- [Images](scripts-images.md): import OpenAI from "openai";
- [Responses API](docs-responses-api.md): Groq's Responses API is fully compatible with OpenAI's Responses API, making it easy to integrate advanced conversati...
- [Vision](scripts-vision.md): import { Groq } from 'groq-sdk';
- [Jsonmode.Py](scripts-jsonmodepy.md): from groq import Groq
- [Vision.Json](scripts-visionjson.md): {
- [Function to encode the image](scripts-localpy.md): from groq import Groq
- [Vision.Py](scripts-visionpy.md): from groq import Groq
- [Multiturn.Py](scripts-multiturnpy.md): from groq import Groq
- [Images and Vision](docs-vision.md): Groq API offers fast inference and low latency for multimodal models with vision capabilities for understanding and i...
- [Example1.Py](scripts-example1py.md): from groq import Groq
- [Example1](scripts-example1.md): import { Groq } from 'groq-sdk';
- [Example2](scripts-example2.md): import { Groq } from 'groq-sdk';
- [Example2.Json](scripts-example2json.md): {
- [Example2.Py](scripts-example2py.md): from groq import Groq
- [Example1.Json](scripts-example1json.md): {
- [Assistant Message Prefilling](docs-prefilling.md): When using Groq API, you can have more control over your model output by prefilling `assistant` messages. This techni...
- [OpenAI Compatibility](docs-openai.md): We designed Groq API to be mostly compatible with OpenAI's client libraries, making it easy to...
- [Multi Turn Conversations](scripts-multi-turn-conversations.md): import Groq from "groq-sdk";
- [Tool Definitions And Use](scripts-tool-definitions-and-use.md): import Groq from "groq-sdk";
- [Multi Turn Conversations.Py](scripts-multi-turn-conversationspy.md): import os
- [Large Prompts And Context.Py](scripts-large-prompts-and-contextpy.md): from groq import Groq
- [Large Prompts And Context](scripts-large-prompts-and-context.md): import Groq from "groq-sdk";
- [Define comprehensive tool set](scripts-tool-definitions-and-usepy.md): from groq import Groq
- [Prompt Caching](docs-prompt-caching.md): Model prompts often contain repetitive content, such as system prompts and tool definitions.
- [Firecrawl](docs-firecrawl.md): [Firecrawl](https://firecrawl.dev) is an enterprise-grade web scraping platform that turns any website into clean, AI...
- [Natural Language.Doc](scripts-natural-languagedoc.md): import Groq from "groq-sdk";
- [Log the tools that were used to generate the response](scripts-executed-toolspy.md): import os
- [Fact Checker.Doc](scripts-fact-checkerdoc.md): import Groq from "groq-sdk";
- [Version.Py](scripts-versionpy.md): from groq import Groq
- [Example 1: Error Explanation (might trigger search)](scripts-code-debuggerpy.md): import os
- [Example 1: Calculation](scripts-natural-languagepy.md): import os
- [Fact Checker](scripts-fact-checker.md): import Groq from "groq-sdk";
- [Usage](scripts-usage.md): import Groq from "groq-sdk";
- [Code Debugger.Doc](scripts-code-debuggerdoc.md): import Groq from "groq-sdk";
- [Code Debugger](scripts-code-debugger.md): import Groq from "groq-sdk";
- [Usage.Doc](scripts-usagedoc.md): import Groq from "groq-sdk";
- [Version](scripts-version.md): import { Groq } from "groq-sdk";
- [Executed Tools.Doc](scripts-executed-toolsdoc.md): import Groq from 'groq-sdk';
- [Ensure your GROQ_API_KEY is set as an environment variable](scripts-fact-checkerpy.md): import os
- [Print all tool calls](scripts-usagepy.md): from groq import Groq
- [Natural Language](scripts-natural-language.md): import Groq from "groq-sdk";
- [Executed Tools](scripts-executed-tools.md): import Groq from 'groq-sdk';
- [Enable Specific Tools.Py](scripts-enable-specific-toolspy.md): from groq import Groq
- [Code Execution Only.Py](scripts-code-execution-onlypy.md): from groq import Groq
- [Code Execution Only](scripts-code-execution-only.md): import Groq from "groq-sdk";
- [Enable Specific Tools](scripts-enable-specific-tools.md): import Groq from "groq-sdk";
- [Built-in Tools](compound-built-in-tools.md): Compound systems come equipped with a comprehensive set of built-in tools that can be intelligently called to answer ...
- [Compound](docs-compound.md): While LLMs excel at generating text, Groq's Compound systems take the next step.
- [Use Cases](compound-use-cases.md): Groq's compound systems excel at a wide range of use cases, particularly when real-time information is required.
- [Search Settings](compound-search-settings.md): No content to display.
- [Systems](compound-systems.md): Groq offers two compound AI systems that intelligently use external tools to provide more accurate, up-to-date, and c...
- [Compound Beta Mini](systems-compound-beta-mini.md): No content to display.
- [Compound](systems-compound.md): * **Model Architecture**: Compound is powered by [Llama 4 Scout](/docs/model/meta-llama/llama-4-scout-17b-16e-instr...
- [Compound Mini](systems-compound-mini.md): Compound mini is powered by Llama 3.3 70B and GPT-OSS 120B for intelligent reasoning and tool use. Unlike groq/compou...
- [Initialize Groq client](docs-e2b.md): [E2B](https://e2b.dev/) Code Interpreter is an open-source SDK that provides secure, sandboxed environments for execu...
- [Initialize the Anchor Browser Client](docs-anchorbrowser.md): [Anchor Browser](https://anchorbrowser.io?utm_source=groq) is the platform for AI Agentic browser automation, which s...
- [Initialize Groq client](docs-gradio.md): [Gradio](https://www.gradio.app/) is a powerful library for creating web interfaces for your applications that enable...
- [Browseruse](docs-browseruse.md): [Browser Use](https://browser-use.com) enables AI models to autonomously browse the web and extract information throu...
- [Huggingface](docs-huggingface.md): [HuggingFace](https://huggingface.co) hosts over 500,000 models and 100,000 datasets. Combined with HuggingFace's MCP...
- [Initialize Groq client](scripts-parallelpy.md): import json
- [Routing.Doc](scripts-routingdoc.md): import Groq from "groq-sdk";
- [Initialize the Groq client](scripts-step1py.md): from groq import Groq
- [Parallel](scripts-parallel.md): import Groq from "groq-sdk";
- [imports calculate function from step 1](scripts-step2py.md): def run_conversation(user_prompt):
- [Step2.Doc](scripts-step2doc.md): // imports calculate function from step 1
- [Initialize the Groq client](scripts-routingpy.md): from groq import Groq
- [Step1](scripts-step1.md): import { Groq } from 'groq-sdk';
- [Define the tool schema](scripts-instructorpy.md): import instructor
- [Parallel.Doc](scripts-paralleldoc.md): import Groq from "groq-sdk";
- [Routing](scripts-routing.md): import Groq from "groq-sdk";
- [Step1.Doc](scripts-step1doc.md): import { Groq } from 'groq-sdk';
- [Step2](scripts-step2.md): // imports calculate function from step 1
- [Introduction to Tool Use](docs-tool-use.md): Tool use is a powerful feature that allows Large Language Models (LLMs) to interact with external resources, such as ...
- [Gcp Private Service Connect](security-gcp-private-service-connect.md): Private Service Connect (PSC) enables you to access Groq's API services through private network connections, eliminat...
- [Reasoning Hidden](scripts-reasoning-hidden.md): import { Groq } from 'groq-sdk';
- [Reasoning Raw](scripts-reasoning-raw.md): import { Groq } from 'groq-sdk';
- [Reasoning Gpt Oss High.Py](scripts-reasoning-gpt-oss-highpy.md): from groq import Groq
- [Reasoning Hidden.Py](scripts-reasoning-hiddenpy.md): from groq import Groq
- [Reasoning Gpt Oss Excl.Py](scripts-reasoning-gpt-oss-exclpy.md): from groq import Groq
- [Reasoning Gpt Oss.Py](scripts-reasoning-gpt-osspy.md): from groq import Groq
- [Reasoning Gpt Oss Excl](scripts-reasoning-gpt-oss-excl.md): import { Groq } from 'groq-sdk';
- [Reasoning Raw.Py](scripts-reasoning-rawpy.md): from groq import Groq
- [Reasoning Parsed](scripts-reasoning-parsed.md): import { Groq } from 'groq-sdk';
- [Reasoning Gpt Oss High](scripts-reasoning-gpt-oss-high.md): import { Groq } from 'groq-sdk';
- [Reasoning Parsed.Py](scripts-reasoning-parsedpy.md): from groq import Groq
- [R1.Py](scripts-r1py.md): from groq import Groq
- [R1](scripts-r1.md): import Groq from 'groq-sdk';
- [Reasoning Gpt Oss](scripts-reasoning-gpt-oss.md): import { Groq } from 'groq-sdk';
- [Reasoning](docs-reasoning.md): Reasoning models excel at complex problem-solving tasks that require step-by-step analysis, logical deduction, and st...
- [Your Data in GroqCloud](docs-your-data.md): Understand how Groq uses customer data and the controls you have.
- [Quickstart](scripts-quickstart-5.md): import { Groq } from 'groq-sdk';
- [Quickstart.Py](scripts-quickstartpy-5.md): from groq import Groq
- [Browser Search](docs-browser-search.md): Some models on Groq have built-in support for interactive browser search, providing a more comprehensive approach to ...
- [Button Group](integrations-button-group.md): The button group is a collection of buttons that are displayed together.
- [What are integrations?](docs-integrations.md): Integrations are a way to connect your application to external services and enhance your Groq-powered applications wi...
- [Integration Buttons](integrations-integration-buttons.md): import type { IntegrationButton } from "./button-group";
- [Initialize Groq LLM](docs-langchain.md): While you could use the Groq SDK directly, [LangChain](https://www.langchain.com/) is a framework that makes it easy ...
- [Xrx](docs-xrx.md): [xRx](https://github.com/8090-inc/xrx-core) is an open-source framework for building AI-powered applications that int...
- [Llama Index](docs-llama-index.md): [LlamaIndex](https://www.llamaindex.ai/) is a data framework for LLM-based applications that benefit from context aug...
- [Deploy to Vercel](docs-mastra.md): [Mastra](https://mastra.ai) is a TypeScript framework for building production-ready AI applications with agents, work...
- [Initialize Large Language Model (LLM) of your choice (see all models on our Models page)](docs-crewai.md): CrewAI is a framework that enables the orchestration of multiple AI agents with specific roles, tools, and goals as a...
- [Spend Limits](docs-spend-limits.md): Control your API costs with automated spending limits and proactive usage alerts when approaching budget thresholds.
- [API Error Codes and Responses](docs-errors.md): Our API uses standard HTTP response status codes to indicate the success or failure of an API request. In cases of er...
- [Toolhouse](docs-toolhouse.md): [Toolhouse](https://toolhouse.ai) is the first Backend-as-a-Service for the agentic stack. Toolhouse allows you to de...
- [Example1.Py](scripts-example1py-2.md): import os
- [Example1](scripts-example1-2.md): const GROQ_API_KEY = process.env.GROQ_API_KEY;
- [Example1.Json](scripts-example1json-2.md): {
- [Flex Processing](docs-flex-processing.md): Flex Processing is a service tier optimized for high-throughput workloads that prioritizes fast inference and can han...
- [Data model for LLM to generate](scripts-json-modepy.md): from typing import List, Optional
- [System Prompt](scripts-system-prompt.md): import { Groq } from "groq-sdk";
- [Define a schema with Pydantic (Python's equivalent to Zod)](scripts-basic-validation-zodpy.md): import os
- [Basic Validation Zod.Doc](scripts-basic-validation-zoddoc.md): import { Groq } from "groq-sdk";
- [Complex Schema Example](scripts-complex-schema-example.md): import Instructor from "@instructor-ai/instructor"; // npm install @instructor-ai/instructor
- [Basic Validation Zod](scripts-basic-validation-zod.md): import { Groq } from "groq-sdk";
- [Streaming Chat Completion](scripts-streaming-chat-completion.md): import Groq from "groq-sdk";
- [Streaming Chat Completion With Stop](scripts-streaming-chat-completion-with-stop.md): import Groq from "groq-sdk";
- [Set up instructor with Groq](scripts-instructor-examplepy.md): import os
- [Set your API key](scripts-prompt-engineeringpy.md): import os
- [Prompt Engineering.Doc](scripts-prompt-engineeringdoc.md): import { Groq } from "groq-sdk";
- [Set up the client with instructor](scripts-complex-schema-examplepy.md): import os
- [Complex Schema Example.Doc](scripts-complex-schema-exampledoc.md): import Instructor from "@instructor-ai/instructor"; // npm install @instructor-ai/instructor
- [Basic Chat Completion](scripts-basic-chat-completion.md): import Groq from "groq-sdk";
- [Instructor Example.Doc](scripts-instructor-exampledoc.md): import Instructor from "@instructor-ai/instructor"; // npm install @instructor-ai/instructor
- [Instructor Example](scripts-instructor-example.md): import Instructor from "@instructor-ai/instructor"; // npm install @instructor-ai/instructor
- [Json Mode](scripts-json-mode.md): import Groq from "groq-sdk";
- [Print the incremental deltas returned by the LLM.](scripts-streaming-chat-completionpy.md): from groq import Groq
- [Performing Async Chat Completion.Py](scripts-performing-async-chat-completionpy.md): import asyncio
- [System Prompt.Py](scripts-system-promptpy.md): from groq import Groq
- [Prompt Engineering](scripts-prompt-engineering.md): import { Groq } from "groq-sdk";
- [Print the completion returned by the LLM.](scripts-streaming-chat-completion-with-stoppy.md): from groq import Groq
- [Streaming Async Chat Completion.Py](scripts-streaming-async-chat-completionpy.md): import asyncio
- [Print the completion returned by the LLM.](scripts-basic-chat-completionpy.md): from groq import Groq
- [Text Generation](docs-text-chat.md): Generating text with Groq's Chat Completions API enables you to have natural, conversational interactions with Groq's...
- [Browserbase](docs-browserbase.md): [BrowserBase](https://browserbase.com) provides cloud-based headless browser infrastructure that makes browser automa...
- [Livekit](docs-livekit.md): [LiveKit](https://livekit.io) complements Groq's high-performance speech recognition capabilities by providing text-t...
- [Chat.Json](scripts-chatjson.md): {
- [Chat.Py](scripts-chatpy.md): from groq import Groq
- [Chat](scripts-chat.md): // Default
- [Content](overview-content.md): Fast LLM inference, OpenAI-compatible. Simple to integrate, easy to scale. Start building in minutes.
- [Connect GitHub (you'll be guided through OAuth flow to get things going)](docs-composio.md): [Composio](https://composio.ai/) is a platform for managing and integrating tools with LLMs and AI agents. You can bu...
- [Prompt Engineering Patterns Guide](prompting-patterns.md): This guide provides a systematic approach to selecting appropriate prompt patterns for various tasks when working wit...
- [Seed](scripts-seed.md): import { Groq } from "groq-sdk"
- [Roles](scripts-roles.md): import Groq from "groq-sdk";
- [Roles.Py](scripts-rolespy.md): from groq import Groq
- [Using a custom stop sequence for structured, concise output.](scripts-stoppy.md): from groq import Groq
- [Seed.Py](scripts-seedpy.md): from groq import Groq
- [Stop](scripts-stop.md): // Using a custom stop sequence for structured, concise output.
- [Prompt Basics](docs-prompting.md): Prompting is the methodology through which we communicate instructions, parameters, and expectations to large languag...
- [Model Migration Guide](prompting-model-migration.md): Migrating prompts from commercial models (GPT, Claude, Gemini) to open-source ones like Llama often requires explicit...
- [Exa](docs-exa.md): [Exa](https://exa.ai) is an AI-native search engine built specifically for LLMs. Unlike keyword-based search, Exa und...
- [Quickstart](scripts-quickstart-6.md): import Groq from "groq-sdk";
- [Final output](scripts-quickstartpy-6.md): from groq import Groq
- [Web Search](docs-web-search.md): Some models and systems on Groq have native support for access to real-time web content, allowing them to answer ques...
- [Countries](web-search-countries.md): export const countries = [
- [Calculation](scripts-calculation.md): import Groq from "groq-sdk";
- [Debugging.Py](scripts-debuggingpy.md): import os
- [Debugging](scripts-debugging.md): import Groq from "groq-sdk";
- [Final output](scripts-gpt-oss-quickstartpy.md): from groq import Groq
- [Quickstart](scripts-quickstart-7.md): import Groq from "groq-sdk";
- [Calculation.Py](scripts-calculationpy.md): import os
- [Final output](scripts-quickstartpy-7.md): import os
- [Gpt Oss Quickstart](scripts-gpt-oss-quickstart.md): import Groq from "groq-sdk";
- [Code Execution](docs-code-execution.md): Some models and systems on Groq have native support for automatic code execution, allowing them to perform calculatio...
- [The Groq integration is available in mlflow >= 2.20.0](docs-mlflow.md): [MLflow](https://mlflow.org/) is an open-source platform developed by Databricks to assist in building better Generat...
- [Feedback Policy](docs-feedback-policy.md): Your feedback is essential to making GroqCloud and our products safer, more reliable, and more useful. This page expl...
- [LoRA Inference on Groq](docs-lora.md): Groq provides inference services for pre-made Low-Rank Adaptation (LoRA) adapters. LoRA is a Parameter-efficient Fine...
- [Library Usage Response.Json](scripts-library-usage-responsejson.md): {
- [Library Usage](scripts-library-usage.md): import Groq from "groq-sdk";
- [Library Usage.Py](scripts-library-usagepy.md): import os
- [Groq Client Libraries](docs-libraries.md): Groq provides both a Python and JavaScript/TypeScript client library.
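
For orientation, the various Quickstart and Chat Completion script entries above all build on the same basic chat-completion call from the Groq SDK. The following is a minimal sketch of that pattern, not copied from any single page above; it assumes the `groq-sdk` package is installed, a `GROQ_API_KEY` environment variable is set, and the `llama-3.3-70b-versatile` model listed above is available to your account.

```typescript
import Groq from "groq-sdk";

// Minimal sketch of the Quickstart pattern; the SDK reads GROQ_API_KEY
// from the environment if no apiKey option is passed.
const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function main() {
  // Send a single-turn chat completion request to a model from the list above.
  const completion = await groq.chat.completions.create({
    model: "llama-3.3-70b-versatile",
    messages: [
      { role: "user", content: "Explain the importance of fast inference in one sentence." },
    ],
  });

  // Print the final content returned by the model.
  console.log(completion.choices[0]?.message?.content);
}

main();
```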