# Litellm

> Documentation for Litellm

## Pages

- [Docs 10 Documentation](docs-10-documentation.md)
- [https://docs.litellm.ai/ llms-full.txt](httpsdocslitellmai-llms-fulltxt.md)
- [LiteLLM - Getting Started](litellm-getting-started.md): Translate inputs to provider's `completion`, `embedding`, and `image_generation` endpoints
- [auth: run 'gcloud auth application-default'](auth-run-gcloud-auth-application-default.md): os.environ["VERTEX_PROJECT"] = "hardy-device-386718"
- [e.g. Call 'WizardLM/WizardCoder-Python-34B-V1.0' hosted on HF Inference endpoints](eg-call-wizardlmwizardcoder-python-34b-v10-hosted-on-hf-inference-endpoints.md): response = completion(
- [azure call](azure-call.md): response = completion(
- [set callbacks](set-callbacks.md): litellm.success_callback = ["lunary", "mlflow", "langfuse", "helicone"] # log input/output to lunary, mlflow, langfus...
- [track_cost_callback](track-cost-callback.md): def track_cost_callback(
- [set callback](set-callback.md): litellm.success_callback = [track_cost_callback] # set custom callback function
- [litellm.completion() call](litellmcompletion-call.md): response = completion(
- [request sent to model set on litellm proxy, `litellm --model`](request-sent-to-model-set-on-litellm-proxy-litellm-model.md): response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [\
- [Completion Function - completion()](completion-function-completion.md): The Input params are **exactly the same** as the
- [Request Body](request-body.md): **Required Fields**
- [Completion Function - completion()](completion-function-completion-2.md): Here's the exact json output you can expect from a litellm `completion` call:
- [Generation/Completion/Chat Completion Models](generationcompletionchat-completion-models.md): | Model Name | Function Call | Required OS Variables |
- [Contact Us](contact-us.md): [Meet with us 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
- [Contributing to Documentation](contributing-to-documentation.md): Clone litellm
- [Embedding Models](embedding-models.md): | Model Name | Function Call | Required OS Variables |
- [Tutorial Intro](tutorial-intro.md): Let's discover **Docusaurus in less than 5 minutes**.
- [Callbacks](callbacks.md): LiteLLM provides `success_callbacks` and `failure_callbacks`, making it easy for you to send data to a particular pro...
- [set callbacks](set-callbacks-2.md): litellm.success_callback=["posthog", "helicone", "lunary"]
- [Helicone Tutorial](helicone-tutorial.md): [Helicone](https://helicone.ai/) is an open source observability platform that proxies your OpenAI traffic and provid...
- [set callbacks](set-callbacks-3.md): litellm.success_callback=["helicone"]
- [Supabase Tutorial](supabase-tutorial.md): [Supabase](https://supabase.com/) is an open source Firebase alternative.
- [set callbacks](set-callbacks-4.md): litellm.success_callback=["supabase"]
- [0 Critical/High Vulnerabilities](0-criticalhigh-vulnerabilities.md): LiteLLM base image now uses `cgr.dev/chainguard/python:latest-dev`
- [Use the provided base image](use-the-provided-base-image.md): FROM ghcr.io/berriai/litellm:main-latest
- [Install dependencies - CHANGE THIS to `apk`](install-dependencies-change-this-to-apk.md): RUN apt-get update && apt-get install -y dumb-init
- [set api keys](set-api-keys.md): os.environ["DEEPGRAM_API_KEY"] = ""
- [Tags](tags.md): [admin ui3](https://docs.litellm.ai/release_notes/tags/admin-ui)
- [Litellm Documentation](litellm-documentation.md)