# Mistral

> Documentation for Mistral

## Pages

- [Mistral Documentation](mistral-documentation.md)
- [List Models](list-models.md): get /v1/models
- [Retrieve Model](retrieve-model.md): get /v1/models/{model_id}
- [Delete Model](delete-model.md): del /v1/models/{model_id}
- [Create a conversation and append entries to it.](create-a-conversation-and-append-entries-to-it.md): post /v1/conversations
- [List all created conversations.](list-all-created-conversations.md): get /v1/conversations
- [Retrieve information about a conversation.](retrieve-a-conversation-information.md): get /v1/conversations/{conversation_id}
- [Append new entries to an existing conversation.](append-new-entries-to-an-existing-conversation.md): post /v1/conversations/{conversation_id}
- [Retrieve all entries in a conversation.](retrieve-all-entries-in-a-conversation.md): get /v1/conversations/{conversation_id}/history
- [Retrieve all messages in a conversation.](retrieve-all-messages-in-a-conversation.md): get /v1/conversations/{conversation_id}/messages
- [Restart a conversation starting from a given entry.](restart-a-conversation-starting-from-a-given-entry.md): post /v1/conversations/{conversation_id}/restart
- [Create an agent that can be used within a conversation.](create-a-agent-that-can-be-used-within-a-conversation.md): post /v1/agents
- [List agent entities.](list-agent-entities.md): get /v1/agents
- [Retrieve an agent entity.](retrieve-an-agent-entity.md): get /v1/agents/{agent_id}
- [Update an agent entity.](update-an-agent-entity.md): patch /v1/agents/{agent_id}
- [Update an agent version.](update-an-agent-version.md): patch /v1/agents/{agent_id}/version
- [Create a conversation and append entries to it.](create-a-conversation-and-append-entries-to-it-2.md): post /v1/conversations#stream
- [Append new entries to an existing conversation.](append-new-entries-to-an-existing-conversation-2.md): post /v1/conversations/{conversation_id}#stream
- [Restart a conversation starting from a given entry.](restart-a-conversation-starting-from-a-given-entry-2.md): post /v1/conversations/{conversation_id}/restart#stream
- [Upload File](upload-file.md): post /v1/files
- [List Files](list-files.md): get /v1/files
- [Retrieve File](retrieve-file.md): get /v1/files/{file_id}
- [Delete File](delete-file.md): del /v1/files/{file_id}
- [Download File](download-file.md): get /v1/files/{file_id}/content
- [Get Signed Url](get-signed-url.md): get /v1/files/{file_id}/url
- [Get Fine Tuning Jobs](get-fine-tuning-jobs.md): get /v1/fine_tuning/jobs
- [Create Fine Tuning Job](create-fine-tuning-job.md): post /v1/fine_tuning/jobs
- [Get Fine Tuning Job](get-fine-tuning-job.md): get /v1/fine_tuning/jobs/{job_id}
- [Cancel Fine Tuning Job](cancel-fine-tuning-job.md): post /v1/fine_tuning/jobs/{job_id}/cancel
- [Start Fine Tuning Job](start-fine-tuning-job.md): post /v1/fine_tuning/jobs/{job_id}/start
- [Update Fine Tuned Model](update-fine-tuned-model.md): patch /v1/fine_tuning/models/{model_id}
- [Archive Fine Tuned Model](archive-fine-tuned-model.md): post /v1/fine_tuning/models/{model_id}/archive
- [Unarchive Fine Tuned Model](unarchive-fine-tuned-model.md): del /v1/fine_tuning/models/{model_id}/archive
- [Get Batch Jobs](get-batch-jobs.md): get /v1/batch/jobs
- [Create Batch Job](create-batch-job.md): post /v1/batch/jobs
- [Get Batch Job](get-batch-job.md): get /v1/batch/jobs/{job_id}
- [Cancel Batch Job](cancel-batch-job.md): post /v1/batch/jobs/{job_id}/cancel
- [Chat Completion](chat-completion.md): post /v1/chat/completions
- [Fim Completion](fim-completion.md): post /v1/fim/completions
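
Several pages below share the same minimal Python flow for calling the Chat Completion endpoint (`post /v1/chat/completions`). As a quick orientation, here is a sketch stitched from those snippets; the model name and prompt are illustrative placeholders.

```python
import os
from mistralai import Mistral

# Retrieve the API key from environment variables
api_key = os.environ["MISTRAL_API_KEY"]

# Initialize the Mistral client
client = Mistral(api_key=api_key)

# Call post /v1/chat/completions; model and prompt are placeholders
chat_response = client.chat.complete(
    model="mistral-medium-latest",
    messages=[{"role": "user", "content": "What is the best French cheese?"}],
)

# Print the content of the response
print(chat_response.choices[0].message.content)
```
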
- [Agents Completion](agents-completion.md): post /v1/agents/completions
- [Embeddings](embeddings.md): post /v1/embeddings
- [Moderations](moderations.md): post /v1/moderations
- [Chat Moderations](chat-moderations.md): post /v1/chat/moderations
- [OCR](ocr.md): post /v1/ocr
- [Classifications](classifications.md): post /v1/classifications
- [Chat Classifications](chat-classifications.md): post /v1/chat/classifications
- [List all libraries you have access to.](list-all-libraries-you-have-access-to.md): get /v1/libraries
- [Create a new Library.](create-a-new-library.md): post /v1/libraries
- [Detailed information about a specific Library.](detailed-information-about-a-specific-library.md): get /v1/libraries/{library_id}
- [Delete a library and all of its documents.](delete-a-library-and-all-of-its-document.md): del /v1/libraries/{library_id}
- [Update a library.](update-a-library.md): put /v1/libraries/{library_id}
- [List documents in a given library.](list-document-in-a-given-library.md): get /v1/libraries/{library_id}/documents
- [Upload a new document.](upload-a-new-document.md): post /v1/libraries/{library_id}/documents
- [List all of the access levels for this library.](list-all-of-the-access-to-this-library.md): get /v1/libraries/{library_id}/share
- [Create or update an access level.](create-or-update-an-access-level.md): put /v1/libraries/{library_id}/share
- [Delete an access level.](delete-an-access-level.md): del /v1/libraries/{library_id}/share
- [Retrieve the metadata of a specific document.](retrieve-the-metadata-of-a-specific-document.md): get /v1/libraries/{library_id}/documents/{document_id}
- [Update the metadata of a specific document.](update-the-metadata-of-a-specific-document.md): put /v1/libraries/{library_id}/documents/{document_id}
- [Delete a document.](delete-a-document.md): del /v1/libraries/{library_id}/documents/{document_id}
- [Retrieve the text content of a specific document.](retrieve-the-text-content-of-a-specific-document.md): get /v1/libraries/{library_id}/documents/{document_id}/text_content
- [Retrieve the processing status of a specific document.](retrieve-the-processing-status-of-a-specific-document.md): get /v1/libraries/{library_id}/documents/{document_id}/status
- [Retrieve the signed URL of a specific document.](retrieve-the-signed-url-of-a-specific-document.md): get /v1/libraries/{library_id}/documents/{document_id}/signed-url
- [Retrieve the signed URL of text extracted from a given document.](retrieve-the-signed-url-of-text-extracted-from-a-given-document.md): get /v1/libraries/{library_id}/documents/{document_id}/extracted-text-signed-url
- [Reprocess a document.](reprocess-a-document.md): post /v1/libraries/{library_id}/documents/{document_id}/reprocess
- [Upload document](upload-document.md): file_path = "mistral7b.pdf"
- [Check document status](check-status-document.md): status = client.beta.libraries.documents.status(library_id=new_library.id, document_id=uploaded_doc.id)
- [Wait for processing to finish](waiting-for-process-to-finish.md): while status.processing_status == "Running":
- [Get document info once processed](get-document-info-once-processed.md): uploaded_doc = client.beta.libraries.documents.get(library_id=new_library.id, document_id=uploaded_doc.id)
- [There is also extracted_text signed_url and raw signed_url](there-is-also-extracted-text-signed-url-and-raw-signed-url.md): print(extracted_text)
- [Get document info once processed](get-document-info-once-processed-2.md): deleted_library = client.beta.libraries.delete(library_id=new_library.id)
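
The library entries above reference a common polling pattern: upload a document, check its status until processing finishes, then fetch its info. A minimal sketch reassembled from those snippets, assuming `client`, `new_library`, and `uploaded_doc` were created as in the upload entries; the sleep interval is an arbitrary choice.

```python
import time

# Check document status (assumes client, new_library, uploaded_doc exist)
status = client.beta.libraries.documents.status(
    library_id=new_library.id, document_id=uploaded_doc.id
)

# Waiting for process to finish
while status.processing_status == "Running":
    time.sleep(2)  # polling interval is arbitrary
    status = client.beta.libraries.documents.status(
        library_id=new_library.id, document_id=uploaded_doc.id
    )

# Get document info once processed
uploaded_doc = client.beta.libraries.documents.get(
    library_id=new_library.id, document_id=uploaded_doc.id
)
```
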
- [deleted_document = client.beta.libraries.documents.delete(library_id=new_library.id, document_id=uploaded_doc.id)](deleted-document-clientbetalibrariesdocumentsdeletelibrary-idnew-libraryid-docum.md): // Get document info once processed
- [Download using the ToolFileChunk ID](download-using-the-toolfilechunk-id.md): file_bytes = client.files.download(file_id=file_chunk.file_id).read()
- [Save the file locally](save-the-file-locally.md): with open(f"image_generated.png", "wb") as file:
- [Create your agents](create-your-agents.md): finance_agent = client.beta.agents.create(
- [Allow the finance_agent to hand off the conversation to the ecb_interest_rate_agent or web_search_agent](allow-the-finance-agent-to-handoff-the-conversation-to-the-ecb-interest-rate-age.md): finance_agent = client.beta.agents.update(
- [Allow the ecb_interest_rate_agent to hand off the conversation to the graph_agent or calculator_agent](allow-the-ecb-interest-rate-agent-to-handoff-the-conversation-to-the-graph-agent.md): ecb_interest_rate_agent = client.beta.agents.update(
- [Allow the web_search_agent to hand off the conversation to the graph_agent or calculator_agent](allow-the-web-search-agent-to-handoff-the-conversation-to-the-graph-agent-or-cal.md): web_search_agent = client.beta.agents.update(
- [Set the current working directory and model to use](set-the-current-working-directory-and-model-to-use.md): cwd = Path(__file__).parent
- [Set the current working directory and model to use](set-the-current-working-directory-and-model-to-use-2.md): cwd = Path(__file__).parent
- [Set the model to use and callback port for OAuth](set-the-model-to-use-and-callback-port-for-oauth.md): MODEL = "mistral-medium-latest"
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client.md): client = Mistral(api_key=api_key)
- [Encode the audio file in base64](encode-the-audio-file-in-base64.md): with open("examples/files/bcn_weather.mp3", "rb") as f:
- [Get the chat response](get-the-chat-response.md): chat_response = client.chat.complete(
- [Print the content of the response](print-the-content-of-the-response.md): print(chat_response.choices[0].message.content)
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-2.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-2.md): client = Mistral(api_key=api_key)
- [Define the messages for the chat](define-the-messages-for-the-chat.md): messages = [
- [Get the chat response](get-the-chat-response-2.md): chat_response = client.chat.complete(
- [Print the content of the response](print-the-content-of-the-response-2.md): print(chat_response.choices[0].message.content)
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-3.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-3.md): client = Mistral(api_key=api_key)
- [If local audio, upload and retrieve the signed url](if-local-audio-upload-and-retrieve-the-signed-url.md): with open("music.mp3", "rb") as f:
- [Define the messages for the chat](define-the-messages-for-the-chat-2.md): messages = [
- [Get the chat response](get-the-chat-response-3.md): chat_response = client.chat.complete(
- [Print the content of the response](print-the-content-of-the-response-3.md): print(chat_response.choices[0].message.content)
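
The agent entries above follow one pattern: create the agents first, then wire handoffs between them via an update. A sketch under the assumption that `client` is initialized, that the other agents were created the same way, and that `agents.update` accepts a `handoffs` list of agent IDs (as the entry titles suggest); the name, description, and model are placeholders.

```python
# Create your agents (name/description/model are placeholders)
finance_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="finance-agent",
    description="Answers finance questions, delegating to specialists.",
)

# Allow the finance_agent to hand off the conversation to other agents;
# assumes ecb_interest_rate_agent and web_search_agent already exist
finance_agent = client.beta.agents.update(
    agent_id=finance_agent.id,
    handoffs=[ecb_interest_rate_agent.id, web_search_agent.id],
)
```
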
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-4.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-4.md): client = Mistral(api_key=api_key)
- [Get the transcription](get-the-transcription.md): with open("/path/to/file/audio.mp3", "rb") as f:
- [Print the content of the response](print-the-content-of-the-response-4.md): print(transcription_response)
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-5.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-5.md): client = Mistral(api_key=api_key)
- [Get the transcription](get-the-transcription-2.md): transcription_response = client.audio.transcriptions.complete(
- [Print the content of the response](print-the-content-of-the-response-5.md): print(transcription_response)
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-6.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-6.md): client = Mistral(api_key=api_key)
- [If local audio, upload and retrieve the signed url](if-local-audio-upload-and-retrieve-the-signed-url-2.md): with open("local_audio.mp3", "rb") as f:
- [Get the transcription](get-the-transcription-3.md): transcription_response = client.audio.transcriptions.complete(
- [Print the content of the response](print-the-content-of-the-response-6.md): print(transcription_response)
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-7.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-7.md): client = Mistral(api_key=api_key)
- [Transcribe the audio with timestamps](transcribe-the-audio-with-timestamps.md): transcription_response = client.audio.transcriptions.complete(
- [Print the contents](print-the-contents.md): print(transcription_response)
- [Write and save the file](write-and-save-the-file.md): with open('batch_results.jsonl', 'wb') as f:
- [Append the tool call message to the chat_history](append-the-tool-call-message-to-the-chat-history.md): chat_history.append(tool_call_result)
- [Print the main response and save each reference](print-the-main-response-and-save-each-reference.md): for chunk in chat_response.choices[0].message.content:
- [Print references only](print-references-only.md): if refs_used:
- [Full Cookbook](full-cookbook.md): You can find a comprehensive cookbook exploring Citations and References leveraging RAG with Wikipedia [here](https:/...
- [make sure to install `langchain` and `langchain-mistralai` in your Python environment](make-sure-to-install-langchain-and-langchain-mistralai-in-your-python-environmen.md): from langchain_mistralai import ChatMistralAI
- [make sure to install `llama-index` and `llama-index-llms-mistralai` in your Python environment](make-sure-to-install-llama-index-and-llama-index-llms-mistralai-in-your-python-e.md): from llama_index.core.llms import ChatMessage
- [Annotations](annotations.md): In addition to the basic OCR functionality, Mistral Document AI API adds the `annotations` functionality, which allow...
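
The transcription entries above all call `client.audio.transcriptions.complete`. A minimal sketch for a local file; the model name and the exact shape of the `file` payload are assumptions, modeled on the file-upload snippets elsewhere in this index.

```python
# Get the transcription from a local audio file
with open("/path/to/file/audio.mp3", "rb") as f:
    transcription_response = client.audio.transcriptions.complete(
        model="voxtral-mini-latest",  # model name is an assumption
        file={"file_name": "audio.mp3", "content": f},  # payload shape assumed
    )

# Print the content of the response
print(transcription_response)
```
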
- [BBOX Annotation response formats](bbox-annotation-response-formats.md): class Image(BaseModel):
- [BBOX Annotation response formats](bbox-annotation-response-formats-2.md): class Image(BaseModel):
- [Client call](client-call.md): response = client.ocr.process(
- [Document Annotation response format](document-annotation-response-format.md): class Document(BaseModel):
- [Client call](client-call-2.md): response = client.ocr.process(
- [BBOX Annotation response format](bbox-annotation-response-format.md): class Image(BaseModel):
- [Document Annotation response format](document-annotation-response-format-2.md): class Document(BaseModel):
- [BBOX Annotation response format with description](bbox-annotation-response-format-with-description.md): class Image(BaseModel):
- [Document Annotation response format](document-annotation-response-format-3.md): class Document(BaseModel):
- [Client call](client-call-3.md): response = client.ocr.process(
- [Path to your pdf](path-to-your-pdf.md): pdf_path = "path_to_your_pdf.pdf"
- [Getting the base64 string](getting-the-base64-string.md): base64_pdf = encode_pdf(pdf_path)
- [Path to your image](path-to-your-image.md): image_path = "path_to_your_image.jpg"
- [Getting the base64 string](getting-the-base64-string-2.md): base64_image = encode_image(image_path)
- [Mistral Document AI](mistral-document-ai.md): src="/img/document_ai_overview.png"
- [Document AI QnA](document-ai-qna.md): The Document QnA capability combines OCR with large language model capabilities to enable natural language interactio...
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-8.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-8.md): client = Mistral(api_key=api_key)
- [If local document, upload and retrieve the signed url](if-local-document-upload-and-retrieve-the-signed-url.md)
- ["content": open("uploaded_file.pdf", "rb"),](content-openuploaded-filepdf-rb.md)
- [signed_url = client.files.get_signed_url(file_id=uploaded_pdf.id)](signed-url-clientfilesget-signed-urlfile-iduploaded-pdfid.md)
- [Define the messages for the chat](define-the-messages-for-the-chat-3.md): messages = [
- [Get the chat response](get-the-chat-response-4.md): chat_response = client.chat.complete(
- [Print the content of the response](print-the-content-of-the-response-7.md): print(chat_response.choices[0].message.content)
- [The last sentence in the document is:\n\n\"Zaremba, W., Sutskever, I., and Vinyals, O. Recurrent neural network regularization. arXiv:1409.2329, 2014.](the-last-sentence-in-the-document-isnnzaremba-w-sutskever-i-and-vinyals-o-recurr.md): // import fs from 'fs';
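
The "Client call" entries above invoke the OCR endpoint (`post /v1/ocr`) with a base64-encoded document. A sketch, assuming the model name `mistral-ocr-latest` and a data-URL document payload; the `encode_pdf` helper from the snippets is replaced here with inline base64 encoding.

```python
import base64

# Path to your pdf
pdf_path = "path_to_your_pdf.pdf"

# Getting the base64 string (equivalent to the encode_pdf helper above)
with open(pdf_path, "rb") as f:
    base64_pdf = base64.b64encode(f.read()).decode("utf-8")

# Client call; model name and document payload shape are assumptions
response = client.ocr.process(
    model="mistral-ocr-latest",
    document={
        "type": "document_url",
        "document_url": f"data:application/pdf;base64,{base64_pdf}",
    },
)
```
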
- [Create a train / test split](create-a-train-test-split.md): train_x, test_x, train_y, test_y = train_test_split(
- [Normalize features](normalize-features.md): scaler = StandardScaler()
- [Train a classifier and compute the test accuracy](train-a-classifier-and-compute-the-test-accuracy.md)
- [For a real problem, C should be properly cross validated and the confusion matrix analyzed](for-a-real-problem-c-should-be-properly-cross-validated-and-the-confusion-matrix.md): clf = LogisticRegression(random_state=0, C=1.0, max_iter=500).fit(
- [clf = LogisticRegression(random_state=0, C=1.0, max_iter=1000, solver='sag').fit(train_x, train_y)](clf-logisticregressionrandom-state0-c10-max-iter1000-solversagfittrain-x-train-y.md): print(f"Precision: {100*np.mean(clf.predict(test_x) == test_y.to_list()):.2f}%")
- [Classify a single example](classify-a-single-example.md): text = "I've been experiencing frequent headaches and vision problems."
- [Create a train / test split](create-a-train-test-split-2.md): train_x, test_x, train_y, test_y = train_test_split(
- [Normalize features](normalize-features-2.md): scaler = StandardScaler()
- [Train a classifier and compute the test accuracy](train-a-classifier-and-compute-the-test-accuracy-2.md)
- [For a real problem, C should be properly cross validated and the confusion matrix analyzed](for-a-real-problem-c-should-be-properly-cross-validated-and-the-confusion-matrix-2.md): clf = LogisticRegression(random_state=0, C=1.0, max_iter=500).fit(
- [clf = LogisticRegression(random_state=0, C=1.0, max_iter=1000, solver='sag').fit(train_x, train_y)](clf-logisticregressionrandom-state0-c10-max-iter1000-solversagfittrain-x-train-y-2.md): print(f"Precision: {100*np.mean(clf.predict(test_x) == test_y.to_list()):.2f}%")
- [create a fine-tuning job](create-a-fine-tuning-job.md): created_jobs = client.fine_tuning.jobs.create(
- []](untitled.md): )
- [start a fine-tuning job](start-a-fine-tuning-job.md): client.fine_tuning.jobs.start(job_id = created_jobs.id)
- [List jobs](list-jobs.md): jobs = client.fine_tuning.jobs.list()
- [Retrieve a job](retrieve-a-jobs.md): retrieved_jobs = client.fine_tuning.jobs.get(job_id = created_jobs.id)
- [Cancel a job](cancel-a-jobs.md): canceled_jobs = client.fine_tuning.jobs.cancel(job_id = created_jobs.id)
- [List jobs](list-jobs-2.md): curl \
- [Retrieve a job](retrieve-a-job.md): curl \
- [Cancel a job](cancel-a-job.md): curl -X POST \
- [create a fine-tuning job](create-a-fine-tuning-job-2.md): created_jobs = client.fine_tuning.jobs.create(
- []](untitled-2.md): )
- [start a fine-tuning job](start-a-fine-tuning-job-2.md): client.fine_tuning.jobs.start(job_id = created_jobs.id)
- [List jobs](list-jobs-3.md): jobs = client.fine_tuning.jobs.list()
- [Retrieve a job](retrieve-a-jobs-2.md): retrieved_jobs = client.fine_tuning.jobs.get(job_id = created_jobs.id)
- [Cancel a job](cancel-a-jobs-2.md): canceled_jobs = client.fine_tuning.jobs.cancel(job_id = created_jobs.id)
- [List jobs](list-jobs-4.md): curl \
- [Retrieve a job](retrieve-a-job-2.md): curl \
- [Cancel a job](cancel-a-job-2.md): curl -X POST \
- [Assuming we have the following data](assuming-we-have-the-following-data.md): data = {
- [Create DataFrame](create-dataframe.md): df = pd.DataFrame(data)
- [Custom Structured Outputs](custom-structured-outputs.md): Custom Structured Outputs allow you to ensure the model provides an answer in a very specific JSON format by supplyin...
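
The fine-tuning entries above trace one job lifecycle: create, start, list, retrieve, cancel. A sketch assuming `client` is initialized and a training file was already uploaded; the `training_files` and `hyperparameters` shapes are assumptions based on the entry titles.

```python
# create a fine-tuning job (file ID and hyperparameters are placeholders)
created_jobs = client.fine_tuning.jobs.create(
    model="open-mistral-7b",
    training_files=[{"file_id": training_file.id, "weight": 1}],
    hyperparameters={"training_steps": 10, "learning_rate": 0.0001},
)

# start a fine-tuning job
client.fine_tuning.jobs.start(job_id=created_jobs.id)

# List jobs
jobs = client.fine_tuning.jobs.list()

# Retrieve a job
retrieved_job = client.fine_tuning.jobs.get(job_id=created_jobs.id)

# Cancel a job
canceled_job = client.fine_tuning.jobs.cancel(job_id=created_jobs.id)
```
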
- [Structured Output](structured-output.md): When utilizing LLMs as agents or steps within a lengthy process, chain, or pipeline, it is often necessary for the ou...
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-9.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-9.md): client = Mistral(api_key=api_key)
- [Define the messages for the chat](define-the-messages-for-the-chat-4.md): messages = [
- [Get the chat response](get-the-chat-response-5.md): chat_response = client.chat.complete(
- [Print the content of the response](print-the-content-of-the-response-8.md): print(chat_response.choices[0].message.content)
- [Path to your image](path-to-your-image-2.md): image_path = "path_to_your_image.jpg"
- [Getting the base64 string](getting-the-base64-string-3.md): base64_image = encode_image(image_path)
- [Retrieve the API key from environment variables](retrieve-the-api-key-from-environment-variables-10.md): api_key = os.environ["MISTRAL_API_KEY"]
- [Initialize the Mistral client](initialize-the-mistral-client-10.md): client = Mistral(api_key=api_key)
- [Define the messages for the chat](define-the-messages-for-the-chat-5.md): messages = [
- [Get the chat response](get-the-chat-response-6.md): chat_response = client.chat.complete(
- [Print the content of the response](print-the-content-of-the-response-9.md): print(chat_response.choices[0].message.content)
- [Letters Orders and Instructions December 1855\n\n**Hoag's Company, if any opportunity offers.**\n\nYou are to be particularly exact and careful in these pagineries, that there is no disgrace meet between the Returns and you Pay Roll, or those who will be strict examining into it hereafter.\n\nI am & c.\n\n*[Signed]*\nEff.](letters-orders-and-instructions-december-1855nnhoags-company-if-any-opportunity-.md): curl \
- [Your huggingface token (HF_AUTH_TOKEN) should be stored in your project secrets on your Cerebrium dashboard](your-huggingface-token-hf-auth-token-should-be-stored-in-your-project-secrets-on.md): login(token=get_secret("HF_AUTH_TOKEN"))
- [Initialize the model](initialize-the-model.md): llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3", dtype="bfloat16", max_model_len=20000, gpu_memory_utilization=0.9)
- [Your huggingface token (HF_AUTH_TOKEN) should be stored in your project secrets on your Cerebrium dashboard](your-huggingface-token-hf-auth-token-should-be-stored-in-your-project-secrets-on-2.md): login(token=get_secret("HF_AUTH_TOKEN"))
- [Initialize the model](initialize-the-model-2.md): llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3", dtype="bfloat16", max_model_len=20000, gpu_memory_utilization=0.9)
- [init the client but point it to TGI](init-the-client-but-point-it-to-tgi.md): client = MistralClient(api_key="-", endpoint="
- [init the client but point it to TGI](init-the-client-but-point-it-to-tgi-2.md): client = OpenAI(api_key="-", base_url="
- [Make sure to install the huggingface_hub package before](make-sure-to-install-the-huggingface-hub-package-before.md): from huggingface_hub import InferenceClient
- [Here is a possible function in Python to find the maximum number of segments that can be formed from a given length `n` using segments of lengths `a`, `b`, and `c`:](here-is-a-possible-function-in-python-to-find-the-maximum-number-of-segments-tha.md): def max_segments(n, a, b, c):
- [This function uses nested loops to generate all possible combinations of segments of lengths `a`, `b`, and `c`, respectively. For each combination, it checks if the total length of the segments is equal to `n`, and if so, it updates the maximum number of segments found so far. The function returns the maximum number of segments that can be formed from `n`.](this-function-uses-nested-loops-to-generate-all-possible-combinations-of-segment.md): Here is another example of Mistral Large writing a function for computing square roots using the babylonian method.
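
The vision entries above pass a base64-encoded image inside the chat messages. A sketch, assuming the data-URL form of the `image_url` content part; the model name and question are placeholders, and `encode_image` from the snippets is replaced with inline base64 encoding.

```python
import base64

# Path to your image
image_path = "path_to_your_image.jpg"

# Getting the base64 string (equivalent to the encode_image helper above)
with open(image_path, "rb") as f:
    base64_image = base64.b64encode(f.read()).decode("utf-8")

# Define the messages for the chat, mixing text and image content
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": f"data:image/jpeg;base64,{base64_image}",
            },
        ],
    }
]

# Get the chat response (model name is a placeholder)
chat_response = client.chat.complete(model="pixtral-12b-2409", messages=messages)

# Print the content of the response
print(chat_response.choices[0].message.content)
```
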
- [Basic RAG](basic-rag.md): Retrieval-augmented generation (RAG) is an AI framework that synergizes the capabilities of LLMs and information retr...
- [Welcome to the Mistral AI Ambassador Program!](welcome-to-the-mistral-ai-ambassador-program.md): As our Mistral AI community continues to grow, we are looking for Mistral experts who are passionate about our models...
- [➡️ Apply ➡️](apply.md): Applications for the Summer 2025 cohort are now open and will be accepted until July 1, 2025. If selected, you will b...
- [🤠 Meet our current Ambassadors 🤠](meet-our-current-ambassadors.md): Thank you to each and every one of you, including those who prefer not to be named, for contributing to our community!
- [➡️ Program details ➡️](program-details.md): - **Free credits:** Mistral Ambassadors will receive free API credits on la Plateforme.
- [📝 Minimum requirements](minimum-requirements.md): - **Monthly Requirement:** Contribute at least one piece of content/event or show a significant amount of community s...
- [Are you ready?](are-you-ready.md): - ✍ [fill out the application here](https://forms.gle/pTMchkVVPCxSVW5u5) ✍
- [How to contribute](how-to-contribute.md): Thank you for your interest in contributing to Mistral AI. We welcome everyone who wishes to contribute and we apprec...
- [define prompt template](define-prompt-template.md): prompt_template = """
- [for each test case](for-each-test-case.md): for name in prompts:
- [calculate accuracy rate across test cases](calculate-accuracy-rate-across-test-cases.md): sum(accuracy_rates) / len(accuracy_rates)
- [define prompt template](define-prompt-template-2.md): prompt_template = """Write a Python function to execute the following task: {task}
- [example using code_eval:](example-using-code-eval.md): pass_at_1, results = code_eval.compute(
- ['completion_id': 0})]}))](completion-id-0.md): - Step 3: Calculate accuracy rate across test cases
- [evaluate code generation](evaluate-code-generation.md): pass_at_1, results = code_eval.compute(references=refs, predictions=preds)
- [{'pass@1': 1.0}](pass1-10.md): Using a Large Language Model (LLM) to evaluate or judge the output of another LLM is a common practice in situations ...
- [Summary](summary.md): {summary}
- [{"readability": 3}](readability-3.md): Human-based evaluation is likely to provide the most accurate and reliable evaluation results. However, it's difficul...
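
The evaluation entries above use Hugging Face's `code_eval` metric to score generated code against test assertions. A minimal sketch; the toy reference/prediction pair is illustrative, and note that `code_eval` executes the candidate code, so the library requires an explicit opt-in.

```python
import os
import evaluate

# code_eval runs model-generated code, so HF requires explicit opt-in
os.environ["HF_ALLOW_CODE_EVAL"] = "1"
code_eval = evaluate.load("code_eval")

# references hold test assertions; predictions hold candidate solutions
refs = ["assert add(2, 3) == 5"]
preds = [["def add(a, b):\n    return a + b"]]

# evaluate code generation
pass_at_1, results = code_eval.compute(references=refs, predictions=preds)
print(pass_at_1)  # e.g. {'pass@1': 1.0}
```
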
- [download the validation and reformat script](download-the-validation-and-reformat-script.md): wget
- [validate and reformat the training data](validate-and-reformat-the-training-data.md): python reformat_data.py ultrachat_chunk_train.jsonl
- [validate and reformat the eval data](validate-the-reformat-the-eval-data.md): python reformat_data.py ultrachat_chunk_eval.jsonl
- [create a fine-tuning job](create-a-fine-tuning-job-3.md): created_jobs = client.fine_tuning.jobs.create(
- [start a fine-tuning job](start-a-fine-tuning-job-3.md): client.fine_tuning.jobs.start(job_id = created_jobs.id)
- [Retrieve a job](retrieve-a-jobs-3.md): retrieved_jobs = client.fine_tuning.jobs.get(job_id = created_jobs.id)
- [Retrieve a job](retrieve-a-job-3.md): curl \
- [get data from hugging face](get-data-from-hugging-face.md): ds = load_dataset("HuggingFaceH4/ultrachat_200k",split="train_gen")
- [save data into .jsonl. This file is about 1.3GB](save-data-into-jsonl-this-file-is-about-13gb.md): with open('train.jsonl', 'w') as f:
- [reformat data](reformat-data.md): !wget
- [Split file into three chunks](split-file-into-three-chunks.md): input_file = "train.jsonl"
- [open the output files](open-the-output-files.md): output_file_objects = [open(file, "w") for file in output_files]
- [counter for output files](counter-for-output-files.md): counter = 0
- [close the output files](close-the-output-files.md): for file in output_file_objects:
- [now you should see three jsonl files under 500MB](now-you-should-see-three-jsonl-files-under-500mb.md): [Observability]
- [Prefix: Use Cases](prefix-use-cases.md): Prefixes are one feature that can easily be game-changing for many use cases and scenarios, while the concept is simp...
- [Prompting Capabilities](prompting-capabilities.md): When you first start using Mistral models, your first interaction will revolve around prompts. The art of crafting ef...
- [Instructions:](instructions.md): In clear and concise language, summarize the key points and themes presented in the essay.
- [Facts](facts.md): 30-year fixed-rate: interest rate 6.403%, APR 6.484%
- [Email](email.md): {insert customer email here}
- [Essay:](essay.md): {insert essay text here}
- [Essay:](essay-2.md): {insert essay text here}
- [essay:](essay-3.md): {insert essay text here}
- [Summaries](summaries.md): {insert the previous output}
- [Sampling: Overview on our sampling settings](sampling-overview-on-our-sampling-settings.md): Here, we will discuss the sampling settings that influence the output of Large Language Models (LLMs). This guide cov...
- [Top P](top-p.md): **Top P** is a setting that limits the tokens considered by a language model based on a probability threshold. It hel...
- [Presence/Frequency Penalty](presencefrequency-penalty.md): **Presence Penalty** determines how much the model penalizes the repetition of words or phrases. It encourages the mo...
- [Tokenize a list of messages](tokenize-a-list-of-messages.md): tokenized = tokenizer.encode_chat_completion(
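
The data-prep entries above ("Split file into three chunks" through "close the output files") outline splitting a large `.jsonl` training file so each chunk stays under the upload size limit. A sketch reassembled from those fragments, distributing lines round-robin across the three output files:

```python
# Split file into three chunks so each stays under the 500MB upload limit
input_file = "train.jsonl"
output_files = [f"train_{i}.jsonl" for i in range(3)]

# open the output files
output_file_objects = [open(file, "w") for file in output_files]

# counter for output files: send each line to the next file in turn
counter = 0
with open(input_file, "r") as f_in:
    for line in f_in:
        output_file_objects[counter].write(line)
        counter = (counter + 1) % len(output_files)

# close the output files
for file in output_file_objects:
    file.close()
```
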