# Llama Cpp

> Import the `examples/llama.android` directory into Android Studio, then perform a Gradle sync and build the project.

## Pages

- [Android](docs-android.md): Import the `examples/llama.android` directory into Android Studio, then perform a Gradle sync and build the project.
- [BLIS](docs-backend-blis.md): BLIS Installation Manual
- [llama.cpp for CANN](docs-backend-cann.md): Background
- [Setting Up CUDA on Fedora](docs-backend-cuda-fedora.md): In this guide we set up [Nvidia CUDA](https://docs.nvidia.com/cuda/) in a toolbox container. This guide is applicable ...
- [llama.cpp for OpenCL](docs-backend-opencl.md): Background
- [llama.cpp for SYCL](docs-backend-sycl.md): Background
- [llama.cpp for AMD ZenDNN](docs-backend-zendnn.md): [!WARNING]
- [Hexagon backend developer details](docs-backend-hexagon-developer.md): The Hexagon backend consists of two parts:
- [llama.cpp for IBM zDNN Accelerator](docs-backend-zdnn.md): [!WARNING]
- [Build Riscv64 Spacemit](docs-build-riscv64-spacemit.md): [!IMPORTANT]
- [Build llama.cpp locally (for s390x)](docs-build-s390x.md): [!IMPORTANT]
- [Build llama.cpp locally](docs-build.md): The main product of this project is the `llama` library. Its C-style interface can be found in [include/llama.h](../i...
- [Add a new model architecture to `llama.cpp`](docs-development-howto-add-model.md): Adding a model requires a few steps:
- [Debugging Tests Tips](docs-development-debugging-tests.md): There is a script called debug-test.sh in the scripts folder that takes a REGEX and an optional test number.
- [Parsing Model Output](docs-development-parsing.md): The `common` library contains a PEG parser implementation suitable for parsing
- [Token generation performance troubleshooting](docs-development-token-generation-performance-tips.md): Make sure you compiled llama with the correct env variables according to [this guide](/docs/build.md#cuda), so that l...
- [Docker](docs-docker.md): Docker must be installed and running on your system.
- [Function Calling](docs-function-calling.md): [chat.h](../common/chat.h) (https://github.com/ggml-org/llama.cpp/pull/9639) adds support for [OpenAI-style function ...
- [Install pre-built version of llama.cpp](docs-install.md): | Install via | Windows | Mac | Linux |
- [LLGuidance Support in llama.cpp](docs-llguidance.md): [LLGuidance](https://github.com/guidance-ai/llguidance) is a library for constrained decoding (also called constraine...
- [MobileVLM](docs-multimodal-mobilevlm.md): Currently this implementation supports [MobileVLM-1.7B](https://huggingface.co/mtgv/MobileVLM-1.7B) / [MobileVLM_V2-1...
- [Gemma 3 vision](docs-multimodal-gemma3.md): [!IMPORTANT]
- [GLMV-EDGE](docs-multimodal-glmedge.md): Currently this implementation supports [glm-edge-v-2b](https://huggingface.co/THUDM/glm-edge-v-2b) and [glm-edge-v-5b...
- [Granite Vision](docs-multimodal-granitevision.md): Download the model and point your `GRANITE_MODEL` environment variable to the path.
- [LLaVA](docs-multimodal-llava.md): Currently this implementation supports [llava-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) variants,
- [MiniCPM-o 2.6](docs-multimodal-minicpmo26.md): Currently, this readme only supports minicpm-omni's image capabilities, and we will update the full-mode support as s...
- [MiniCPM-o 4](docs-multimodal-minicpmo40.md): Download [MiniCPM-o-4](https://huggingface.co/openbmb/MiniCPM-o-4) PyTorch model from Hugging Face to "MiniCPM-o-4" fo...
- [MiniCPM-Llama3-V 2.5](docs-multimodal-minicpmv25.md): Download [MiniCPM-Llama3-V-2_5](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5) PyTorch model from Hugging Face t...
- [MiniCPM-V 2.6](docs-multimodal-minicpmv26.md): Download [MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6) PyTorch model from Hugging Face to "MiniCPM-V-2...
- [MiniCPM-V 4](docs-multimodal-minicpmv40.md): Download [MiniCPM-V-4](https://huggingface.co/openbmb/MiniCPM-V-4) PyTorch model from Hugging Face to "MiniCPM-V-4" fo...
- [MiniCPM-V 4.5](docs-multimodal-minicpmv45.md): Download [MiniCPM-V-4_5](https://huggingface.co/openbmb/MiniCPM-V-4_5) PyTorch model from Hugging Face to "MiniCPM-V-4...
- [Multimodal](docs-multimodal.md): llama.cpp supports multimodal input via `libmtmd`. Currently, two tools support this feature:
- [GGML Operations](docs-ops.md): List of GGML operations and backend support status.
- [llama.cpp INI Presets](docs-preset.md): The INI preset feature, introduced in [PR#17859](https://github.com/ggml-org/llama.cpp/pull/17859), allows users to c...