# LM Studio

> Documentation for LM Studio

## Pages

- [LM Studio Documentation](lmstudio-documentation.md)
- [Welcome to LM Studio Docs!](welcome-to-lm-studio-docs.md): Learn how to run Llama, DeepSeek, Qwen, Phi, and other LLMs locally with LM Studio.
- [Use MCP servers in LM Studio](use-mcp-servers-in-lm-studio.md): Starting with 0.3.17 (b10), LM Studio supports both local and remote MCP servers. You can add MCPs by editing the app's `m...
- [Generate your own MCP install link](generate-your-own-mcp-install-link.md): Enter your MCP JSON entry to generate a deeplink for the `Add to LM Studio` button.
- [model.yaml is an open standard for defining cross-platform, composable AI models](modelyaml-is-an-open-standard-for-defining-cross-platform-composable-ai-models.md)
- [Learn more at https://modelyaml.org](learn-more-at-httpsmodelyamlorg.md): model: qwen/qwen3-8b
- [Quickstart](quickstart.md): The easiest way to get started is by cloning an existing model, modifying it, and then running `lms push`.
- [... the rest of the file](the-rest-of-the-file.md): Authenticate with the Hub from the command line:
- [Import Presets](import-presets.md): First, click the presets dropdown in the sidebar. You will see a list of your presets along with 2 buttons: `+ New Pr...
- [LM Studio Developer Docs](lm-studio-developer-docs.md): Build with LM Studio's local APIs and SDKs — TypeScript, Python, REST, and OpenAI-compatible endpoints.
- [... the rest of your code ...](the-rest-of-your-code.md): import OpenAI from 'openai';
- [Initialize OpenAI client that points to the local LM Studio server](initialize-openai-client-that-points-to-the-local-lm-studio-server.md): client = OpenAI(
- [Define the conversation with the AI](define-the-conversation-with-the-ai.md): messages = [
- [Define the expected response structure](define-the-expected-response-structure.md): character_schema = {
- [Get response from AI](get-response-from-ai.md): response = client.chat.completions.create(
- [Parse and display the results](parse-and-display-the-results.md): results = json.loads(response.choices[0].message.content)
- [Tools](tools.md): You may call one or more functions to assist with the user query.
- [Connect to LM Studio](connect-to-lm-studio.md): client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
- [Define a simple function](define-a-simple-function.md): def say_hello(name: str) -> str:
- [Tell the AI about our function](tell-the-ai-about-our-function.md): tools = [
- [Ask the AI to use our function](ask-the-ai-to-use-our-function.md): response = client.chat.completions.create(
- [Get the name the AI wants to use a tool to say hello to](get-the-name-the-ai-wants-to-use-a-tool-to-say-hello-to.md)
- [(Assumes the AI has requested a tool call and that tool call is say_hello)](assumes-the-ai-has-requested-a-tool-call-and-that-tool-call-is-say-hello.md): tool_call = response.choices[0].message.tool_calls[0]
- [Actually call the say_hello function](actually-call-the-say-hello-function.md): say_hello(name) # Prints: Hello, Bob the Builder!
- [Point to the local server](point-to-the-local-server.md): client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
- [LM Studio](lm-studio.md): response = client.chat.completions.create(
- [Note this code assumes we have already determined that the model generated a function call.](note-this-code-assumes-we-have-already-determined-that-the-model-generated-a-fun.md): tool_call = response.choices[0].message.tool_calls[0]
- [Call the get_delivery_date function with the extracted order_id](call-the-get-delivery-date-function-with-the-extracted-order-id.md): delivery_date = get_delivery_date(order_id)
- [Create a message containing the result of the function call](create-a-message-containing-the-result-of-the-function-call.md): function_call_result_message = {
- [Prepare the chat completion call payload](prepare-the-chat-completion-call-payload.md): completion_messages_payload = [
- [Call the OpenAI API's chat completions endpoint to send the tool call result back to the model](call-the-openai-apis-chat-completions-endpoint-to-send-the-tool-call-result-back.md)
- [LM Studio](lm-studio-2.md): response = client.chat.completions.create(
- [Point to the local server](point-to-the-local-server-2.md): client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
- [`lmstudio-python` (Python SDK)](lmstudio-python-python-sdk.md): Getting started with LM Studio's Python SDK
- [Interactive Convenience, Deterministic Resource Management, or Structured Concurrency?](interactive-convenience-deterministic-resource-management-or-structured-concurre.md): As shown in the example above, there are three distinct approaches for working
- [A JSON schema for a book](a-json-schema-for-a-book.md): schema = {
- [Inference Parameters](inference-parameters.md): Set inference-time parameters such as `temperature`, `maxTokens`, `topP`, and more.
- [Load Parameters](load-parameters.md): Set load-time parameters such as the context length, GPU offload ratio, and more.
- [`lmstudio-js` (TypeScript SDK)](lmstudio-js-typescript-sdk.md): Getting started with LM Studio's TypeScript / JavaScript SDK
- [Inference Parameters](inference-parameters-2.md): Set inference-time parameters such as `temperature`, `maxTokens`, `topP`, and more.
- [Load Parameters](load-parameters-2.md): Set load-time parameters such as the context length, GPU offload ratio, and more.
- [`lms` — LM Studio's CLI](lms-lm-studios-cli.md): Get started with the `lms` command line utility.
- [Only the formatted user input](only-the-formatted-user-input.md): lms log stream --source model --filter input
- [Only the model output (emitted once the message completes)](only-the-model-output-emitted-once-the-message-completes.md): lms log stream --source model --filter output
- [Both directions](both-directions.md): lms log stream --source model --filter input,output
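
Several of the page fragments above ("Initialize OpenAI client that points to the local LM Studio server", "Point to the local server", "Get response from AI") are pieces of one pattern: point the official `openai` Python client at LM Studio's OpenAI-compatible server and call it like any hosted endpoint. A minimal sketch of that pattern, assuming the default local server address `http://localhost:1234/v1`; the model identifier is a placeholder, use whatever model the server has loaded:

```python
from openai import OpenAI

# Point the client at the local LM Studio server.
# No real API key is needed locally; any placeholder string works.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Ask the locally loaded model a question.
response = client.chat.completions.create(
    model="qwen/qwen3-8b",  # placeholder model identifier
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```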
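
The "Define the expected response structure" and "Parse and display the results" fragments describe structured output: pass a JSON Schema through `response_format` and parse the reply with `json.loads`. A hedged sketch of that flow under the same local-server assumption; the schema fields here are invented for illustration, not taken from the docs page:

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Define the expected response structure as a JSON Schema
# (illustrative fields; the actual docs page uses its own schema).
character_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "character",
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "occupation": {"type": "string"},
            },
            "required": ["name", "occupation"],
        },
    },
}

response = client.chat.completions.create(
    model="qwen/qwen3-8b",  # placeholder model identifier
    messages=[{"role": "user", "content": "Invent a fictional character."}],
    response_format=character_schema,
)

# Parse and display the results
results = json.loads(response.choices[0].message.content)
print(results)
```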
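
The tool-use fragments ("Define a simple function" through "Actually call the say_hello function") belong to one round trip: describe a function to the model, let it request a call, then run the function yourself with the arguments it produced. A condensed sketch, again assuming the default local server and a placeholder model; like the docs page, it assumes the model actually requested a `say_hello` tool call:

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Define a simple function
def say_hello(name: str) -> str:
    return f"Hello, {name}!"

# Tell the AI about our function
tools = [{
    "type": "function",
    "function": {
        "name": "say_hello",
        "description": "Greet someone by name.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}]

# Ask the AI to use our function
response = client.chat.completions.create(
    model="qwen/qwen3-8b",  # placeholder model identifier
    messages=[{"role": "user", "content": "Say hello to Bob the Builder."}],
    tools=tools,
)

# Assumes the AI has requested a tool call and that tool call is say_hello
tool_call = response.choices[0].message.tool_calls[0]
name = json.loads(tool_call.function.arguments)["name"]

# Actually call the say_hello function
print(say_hello(name))  # Prints: Hello, Bob the Builder!
```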
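
The `lmstudio-python` entry points at a native SDK as an alternative to the OpenAI-compatible route. A minimal sketch based on the SDK's convenience API, assuming the package is installed (`pip install lmstudio`) and a model is available locally; the model identifier is again a placeholder:

```python
import lmstudio as lms

# Get a handle to a local model by identifier.
model = lms.llm("qwen/qwen3-8b")  # placeholder model identifier

# Request a single response and print it.
result = model.respond("What is the meaning of life?")
print(result)
```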