# Llmware

> description: community resources, getting help and sharing ideas

## Pages

- [Community](community-community.md): Welcome to the llmware community! We are on a mission to pioneer the use of small language models as transformation...
- [FAQ](community-faq.md): You can set the chunk size with the `chunk_size` parameter of the `add_files` method.
- [Join Our Community](community-join-our-community.md): ___
- [Need Help](community-need-help.md): ___
- [Troubleshooting](community-troubleshooting.md): ___
- [Agent Inference Server](components-agent-inference-server.md): LLMWare supports multiple deployment options, including the use of REST APIs to implement most model invocations.
- [Agents](components-agents.md): Agents with Function Calls and SLIM Models 🔥
- [Components](components-components.md): llmware is characterized by a logically integrated set of data pipelines involved in building LLM-based workflows, ce...
- [Data Stores](components-data-stores.md): Simple-to-Scale Database Options - integrated data stores from laptop to parallelized cluster.
- [Embedding Models](components-embedding-models.md): llmware supports 30+ embedding models out of the box in the default ModelCatalog, with easy extensibility to add other...
- [GGUF](components-gguf.md): llmware packages its own build of the llama.cpp backend engine to enable running quantized models in GGUF format, whi...
- [Library](components-library.md): Library is the main organizing construct for unstructured information in LLMWare. Users can create one large librar...
- [Model Catalog](components-model-catalog.md): Access all models the same way with easy lookup, regardless of underlying implementation.
- [Prompt with Sources](components-prompt-with-sources.md): Prompt with Sources: the easiest way to combine knowledge retrieval with an LLM inference, and provides several high-l...
- [Query](components-query.md): Query libraries with a mix of text, semantic, hybrid, metadata, and custom filters. The retrieval.py module implements...
- [RAG Optimized Models](components-rag-optimized-models.md): RAG-Optimized Models - 1-7B parameter models designed for RAG workflow integration and running locally.
- [Release History](components-release-history.md): Release History
- [SLIM Models](components-slim-models.md): Generally, function-calling is a specialized capability of frontier language models, such as OpenAI GPT4.
- [Vector Databases](components-vector-databases.md): llmware supports the following vector databases:
- [Whisper CPP](components-whisper-cpp.md): llmware has an integrated WhisperCPP backend which enables fast, easy local voice-to-text processing.
- [Code contributions](contributing-code.md): One way to contribute to `llmware` is by contributing to the code base.
- [Contributing](contributing-contributing.md): {: .note}
- [Documentation contributions](contributing-documentation.md): One way to contribute to `llmware` is by contributing documentation.
- [Agents](examples-agents.md): 🚀 Start Building Multi-Model Agents Locally on a Laptop 🚀
- [Datasets](examples-datasets.md): llmware provides powerful capabilities to transform raw unstructured information into various model-ready datasets.
- [Embedding](examples-embedding.md): We introduce `llmware` through self-contained examples.
- [Examples](examples-examples.md): llmware offers a wide range of examples to cover the lifecycle of building RAG and Agent based applications using...
- [Introduction by Examples](examples-getting-started.md): We introduce `llmware` through self-contained examples.
- [Models](examples-models.md): We introduce `llmware` through self-contained examples.
- [Notebooks](examples-notebooks.md): We introduce `llmware` through self-contained examples.
- [Parsing](examples-parsing.md): We introduce `llmware` through self-contained examples.
- [Prompts](examples-prompts.md): We introduce `llmware` through self-contained examples.
- [Retrieval](examples-retrieval.md): We introduce `llmware` through self-contained examples.
- [Structured Tables](examples-structured-tables.md): We introduce `llmware` through self-contained examples.
- [UI](examples-ui.md): We introduce `llmware` through self-contained examples.
- [Use Cases](examples-use-cases.md): 🚀 Use Cases Examples 🚀
- [Clone Repo](getting-started-clone-repo.md): The llmware repo can be pulled locally to get access to all the examples, or to work directly with the latest version...
- [Fast Start](getting-started-fast-start.md): Fast Start: Learning RAG with llmware through 6 examples
- [Getting Started](getting-started-getting-started.md): From quickly building POCs to scalable LLM Apps for the enterprise, LLMWare is packed with all the tools you need.
- [Installation](getting-started-installation.md): Set up
- [Overview](getting-started-overview.md): `llmware` provides a unified framework for building LLM-based applications (e.g., RAG, Agents), using small, specializ...
- [Platforms Supported](getting-started-platforms.md): ___
- [Working with Docker](getting-started-working-with-docker.md): This section is a short guide on setting up a Linux environment with Docker and running LLMWare examples with differe...
- [Home | llmware](index.md): From quickly building POCs to scalable LLM Apps for the enterprise, LLMWare is packed with all the tools you need.
- [Advanced RAG](learn-advanced-techniques-for-rag.md): llmware YouTube Video Channel
- [Core RAG Scenarios Running Locally](learn-core-rag-scenarios-running-locally.md): Core RAG Scenarios Run Locally
- [Voice Transcription with Whisper CPP](learn-integrated-voice-transcription-with-whisper-cpp.md): Integrated Voice Transcription with Whisper CPP
- [Learn](learn-learn.md): Learn: YouTube Video Series
- [Other Topics](learn-other-topics.md): Other Notable Videos and Topics
- [Parsing Embedding and Data Extraction](learn-parsing-embedding-data-extraction.md): Parsing, Embedding, and Data Extraction
- [Using Agents & Function Calls with SLIM Models](learn-using-agents-functions-slim-models.md): Using Agents, Function Calls and SLIM Models
- [Using Quantized GGUF Models](learn-using-quantized-gguf-models.md): Using Quantized GGUF Models