# Mlx Lm

> You can use the `mlx-lm` package to fine-tune an LLM with low-rank adaptation (LoRA).

## Pages

- [Fine-Tuning with LoRA or QLoRA](lora.md): Fine-tune an LLM with low-rank adaptation (LoRA) or QLoRA using the `mlx-lm` package.
- [Api_Benchmark](api-benchmark.md): defines `setup_arg_parser()`
- [Api_Cache_Prompt](api-cache-prompt.md): defines `setup_arg_parser()`
- [Api_Chat](api-chat.md): defines `setup_arg_parser()`
- [Api_Convert](api-convert.md): defines `mixed_quant_predicate_builder()`
- [Api_Evaluate](api-evaluate.md)
- [Api_Fuse](api-fuse.md): defines `parse_arguments() -> argparse.Namespace`
- [Api_Generate](api-generate.md): defines `str2bool()`
- [Api_Gguf](api-gguf.md): defines `class TokenType(IntEnum)`
- [Api_Lora](api-lora.md)
- [Api_Manage](api-manage.md): defines `tabulate(rows: List[List[Union[str, int]]], headers: List[str]) -> str`
- [Api_Perplexity](api-perplexity.md)
- [Api_Sample_Utils](api-sample-utils.md): defines `make_sampler()`
- [Api_Server](api-server.md): defines `get_system_fingerprint()`
- [Api_Tokenizer_Utils](api-tokenizer-utils.md): defines `class StreamingDetokenizer`
- [Api_Upload](api-upload.md): defines `main()`
- [Api_Utils](api-utils.md): defines `_unpack_awq_weights(qweight: mx.array) -> mx.array`
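
## Example

The LoRA page above documents a command-line entry point for fine-tuning. As a minimal sketch, a LoRA run can be started like this; the model ID, data directory, and iteration count are illustrative placeholders, not defaults:

```shell
# Fine-tune with LoRA; ./data is assumed to contain train.jsonl and valid.jsonl.
mlx_lm.lora \
    --model mistralai/Mistral-7B-v0.1 \
    --train \
    --data ./data \
    --iters 600
```

For QLoRA, the same command is typically pointed at a quantized model, for example one produced with `mlx_lm.convert -q`.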