# Raspberry Pi

> *Note: This file could not be automatically converted from AsciiDoc.*

---

# Source: about.adoc

[[ai-camera]]
== About

The Raspberry Pi AI Camera uses the Sony IMX500 imaging sensor to provide low-latency, high-performance AI capabilities to any camera application. Tight integration with xref:../computers/camera_software.adoc[Raspberry Pi's camera software stack] allows users to deploy their own neural network models with minimal effort.

image::images/ai-camera.jpg[The Raspberry Pi AI Camera]

This section demonstrates how to run either a pre-packaged or custom neural network model on the camera. Additionally, this section includes the steps required to interpret inference data generated by neural networks running on the IMX500 in https://github.com/raspberrypi/rpicam-apps[`rpicam-apps`] and https://github.com/raspberrypi/picamera2[Picamera2].

---

# Source: details.adoc

== Under the hood

=== Overview

The Raspberry Pi AI Camera works differently from traditional AI-based camera image processing systems, as shown in the diagram below:

image::images/imx500-comparison.svg[Traditional versus IMX500 AI camera systems]

The left side shows the architecture of a traditional AI camera system. In such a system, the camera delivers images to the Raspberry Pi. The Raspberry Pi processes the images and then performs AI inference. Traditional systems may use external AI accelerators (as shown) or rely exclusively on the CPU.

The right side shows the architecture of a system that uses the IMX500. The camera module contains a small Image Signal Processor (ISP) which turns the raw camera image data into an **input tensor**. The camera module sends this tensor directly into the AI accelerator within the camera, which produces **output tensors** that contain the inferencing results.
The AI accelerator sends these tensors to the Raspberry Pi. There is no need for an external accelerator, nor for the Raspberry Pi to run neural network software on the CPU.

To fully understand this system, familiarise yourself with the following concepts:

Input Tensor:: The part of the sensor image passed to the AI engine for inferencing. Produced by a small on-board ISP, which also crops and scales the camera image to the dimensions expected by the neural network that has been loaded. The input tensor is not normally made available to applications, though it is possible to access it for debugging purposes.

Region of Interest (ROI):: Specifies exactly which part of the sensor image is cropped out before being rescaled to the size demanded by the neural network. Can be queried and set by an application. The units used are always pixels in the full-resolution sensor output. The default ROI setting uses the full image received from the sensor, cropping no data.

Output Tensors:: The results of inferencing performed by the neural network. The precise number and shape of the outputs depend on the neural network. Application code must understand how to handle the tensors.

=== System architecture

The diagram below shows the various camera software components (in green) used during our imaging/inference use case with the Raspberry Pi AI Camera module hardware (in red):

image::images/imx500-block-diagram.svg[IMX500 block diagram]

At startup, the IMX500 sensor module loads firmware to run a particular neural network model. During streaming, the IMX500 generates _both_ an image stream and an inference stream. This inference stream holds the inputs and outputs of the neural network model, also known as input/output **tensors**.

=== Device drivers

At the lowest level, the IMX500 sensor kernel driver configures the camera module over the I2C bus.
The CSI2 driver (`CFE` on Pi 5, `Unicam` on all other Pi platforms) sets up the receiver to write the image data stream into a frame buffer, and the embedded data and inference data streams into another buffer in memory.

The firmware files are also transferred over the I2C bus. On most devices this uses the standard I2C protocol, but Raspberry Pi 5 uses a custom high-speed protocol. The RP2040 SPI driver in the kernel handles firmware file transfer, since the transfer uses the RP2040 microcontroller. The microcontroller bridges the I2C transfers from the kernel to the IMX500 via a SPI bus. Additionally, the RP2040 caches firmware files in on-board storage. This avoids the need to transfer entire firmware blobs over the I2C bus, significantly speeding up loading for firmware you've already used.

=== `libcamera`

Once `libcamera` dequeues the image and inference data buffers from the kernel, the IMX500-specific `cam-helper` library (part of the Raspberry Pi IPA within `libcamera`) parses the inference buffer to access the input/output tensors. These tensors are packaged as Raspberry Pi vendor-specific https://libcamera.org/api-html/namespacelibcamera_1_1controls.html[`libcamera` controls]. `libcamera` returns the following controls:

[%header,cols="a,a"]
|===
| Control | Description

| `CnnOutputTensor`
| Floating point array storing the output tensors.

| `CnnInputTensor`
| Floating point array storing the input tensor.
| `CnnOutputTensorInfo`
| Network-specific parameters describing the output tensors' structure:

[source,c]
----
struct OutputTensorInfo {
	uint32_t tensorDataNum;
	uint32_t numDimensions;
	uint16_t size[MaxNumDimensions];
};

struct CnnOutputTensorInfo {
	char networkName[NetworkNameLen];
	uint32_t numTensors;
	OutputTensorInfo info[MaxNumTensors];
};
----

| `CnnInputTensorInfo`
| Network-specific parameters describing the input tensor's structure:

[source,c]
----
struct CnnInputTensorInfo {
	char networkName[NetworkNameLen];
	uint32_t width;
	uint32_t height;
	uint32_t numChannels;
};
----

|===

=== `rpicam-apps`

`rpicam-apps` provides an IMX500 post-processing stage base class that implements helpers for IMX500 post-processing stages: https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/imx500/imx500_post_processing_stage.hpp[`IMX500PostProcessingStage`]. Use this base class to derive a new post-processing stage for any neural network model running on the IMX500. For an example, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/imx500/imx500_object_detection.cpp[`imx500_object_detection.cpp`]:

[source,cpp]
----
class ObjectDetection : public IMX500PostProcessingStage
{
public:
	ObjectDetection(RPiCamApp *app) : IMX500PostProcessingStage(app) {}

	char const *Name() const override;
	void Read(boost::property_tree::ptree const &params) override;
	void Configure() override;
	bool Process(CompletedRequestPtr &completed_request) override;
};
----

For every frame received by the application, the `Process()` function is called (`ObjectDetection::Process()` in the above case).
In this function, you can extract the output tensor for further processing or analysis:

[source,cpp]
----
auto output = completed_request->metadata.get(controls::rpi::CnnOutputTensor);
if (!output)
{
	LOG_ERROR("No output tensor found in metadata!");
	return false;
}

std::vector<float> output_tensor(output->data(), output->data() + output->size());
----

Once completed, the final results can either be visualised or saved in metadata and consumed by either another downstream stage or the top-level application itself. In the object inference case:

[source,cpp]
----
if (objects.size())
	completed_request->post_process_metadata.Set("object_detect.results", objects);
----

The `object_detect_draw_cv` post-processing stage running downstream fetches these results from the metadata and draws the bounding boxes onto the image in the `ObjectDetectDrawCvStage::Process()` function:

[source,cpp]
----
std::vector<Detection> detections;
completed_request->post_process_metadata.Get("object_detect.results", detections);
----

The following table contains a full list of helper functions provided by `IMX500PostProcessingStage`:

[%header,cols="a,a"]
|===
| Function | Description

| `Read()`
| Typically called from the derived class's `Read()` method, this function reads the config parameters for input tensor parsing and saving. This function also reads the neural network model file string (`"network_file"`) and sets up the firmware to be loaded on camera open.

| `Process()`
| Typically called from the derived class's `Process()` method, this function processes and saves the input tensor to a file if required by the JSON config file.

| `SetInferenceRoiAbs()`
| Sets an absolute region of interest (ROI) crop rectangle on the sensor image to use for inferencing on the IMX500.

| `SetInferenceRoiAuto()`
| Automatically calculates the region of interest (ROI) crop rectangle on the sensor image to preserve the input tensor aspect ratio for a given neural network.
| `ShowFwProgressBar()`
| Displays a progress bar on the console showing the progress of the neural network firmware upload to the IMX500.

| `ConvertInferenceCoordinates()`
| Converts from the input tensor coordinate space to the final ISP output image space. There are a number of scaling/cropping/translation operations occurring from the original sensor image to the fully processed ISP output image. This function converts coordinates provided by the output tensor to the equivalent coordinates after performing these operations.

|===

=== Picamera2

IMX500 integration in Picamera2 is very similar to what is available in `rpicam-apps`. Picamera2 has an IMX500 helper class that provides the same functionality as the `rpicam-apps` `IMX500PostProcessingStage` base class. This can be imported to any Python script with:

[source,python]
----
from picamera2.devices.imx500 import IMX500

# This must be called before instantiation of Picamera2
imx500 = IMX500(model_file)
----

To retrieve the output tensors, fetch them from the controls. You can then apply additional processing in your Python script.
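As a simple illustration of such processing, a classification network's output tensor is just a flat list of scores, one per class. The sketch below is purely illustrative (the scores and labels are invented; in a real script the tensor would come from `IMX500.get_outputs()`): it softmax-normalises the raw scores and ranks the top results in plain Python.

[source,python]
----
import math

def top_k(scores, labels, k=3):
    """Softmax-normalise raw classifier scores and return the k most likely labels."""
    m = max(scores)                              # subtract the max for numerical stability
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    probs = [e / total for e in exp]
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Invented example values; a real script would pass in one output tensor
# returned by imx500.get_outputs(metadata).
scores = [0.2, 3.1, 1.4, -0.5]
labels = ["cat", "dog", "bird", "fish"]
print(top_k(scores, labels, k=2))
----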
For example, in an object inference use case such as https://github.com/raspberrypi/picamera2/tree/main/examples/imx500/imx500_object_detection_demo.py[imx500_object_detection_demo.py], the object bounding boxes and confidence values are extracted in `parse_detections()`, and the boxes are drawn on the image in `draw_detections()`:

[source,python]
----
class Detection:
    def __init__(self, coords, category, conf, metadata):
        """Create a Detection object, recording the bounding box, category and confidence."""
        self.category = category
        self.conf = conf
        obj_scaled = imx500.convert_inference_coords(coords, metadata, picam2)
        self.box = (obj_scaled.x, obj_scaled.y, obj_scaled.width, obj_scaled.height)

def draw_detections(request, detections, stream="main"):
    """Draw the detections for this request onto the ISP output."""
    labels = get_labels()
    with MappedArray(request, stream) as m:
        for detection in detections:
            x, y, w, h = detection.box
            label = f"{labels[int(detection.category)]} ({detection.conf:.2f})"
            cv2.putText(m.array, label, (x + 5, y + 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
            cv2.rectangle(m.array, (x, y), (x + w, y + h), (0, 0, 255, 0))
        if args.preserve_aspect_ratio:
            b = imx500.get_roi_scaled(request)
            cv2.putText(m.array, "ROI", (b.x + 5, b.y + 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
            cv2.rectangle(m.array, (b.x, b.y), (b.x + b.width, b.y + b.height), (255, 0, 0, 0))

def parse_detections(request, stream="main"):
    """Parse the output tensor into a number of detected objects, scaled to the ISP output."""
    metadata = request.get_metadata()
    outputs = imx500.get_outputs(metadata)
    boxes, scores, classes = outputs[0][0], outputs[1][0], outputs[2][0]
    detections = [
        Detection(box, category, score, metadata)
        for box, score, category in zip(boxes, scores, classes)
        if score > threshold
    ]
    draw_detections(request, detections, stream)
----

Unlike the `rpicam-apps` example, this example applies no additional hysteresis or temporal filtering.
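If your application needs steadier output, you can layer a simple form of hysteresis on top of the raw detections, similar in spirit to the temporal filtering performed by the `rpicam-apps` stage. The class below is an illustrative sketch only (the thresholds and structure are invented, and it tracks a single object rather than matching detections across frames):

[source,python]
----
class DetectionFilter:
    """Report an object as visible only after several consecutive hits,
    and hide it again only after several consecutive misses."""

    def __init__(self, show_after=3, hide_after=5):
        self.show_after = show_after   # consecutive detections before showing
        self.hide_after = hide_after   # consecutive misses before hiding
        self.hits = 0
        self.misses = 0
        self.visible = False

    def update(self, detected):
        """Feed one frame's detection result; returns the current visibility."""
        if detected:
            self.hits += 1
            self.misses = 0
            if self.hits >= self.show_after:
                self.visible = True
        else:
            self.misses += 1
            self.hits = 0
            if self.misses >= self.hide_after:
                self.visible = False
        return self.visible
----

Calling `update()` once per frame suppresses both one-frame false positives and one-frame dropouts, at the cost of a few frames of latency when objects appear or disappear.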
The IMX500 class in Picamera2 provides the following helper functions:

[%header,cols="a,a"]
|===
| Function | Description

| `IMX500.get_full_sensor_resolution()`
| Returns the full sensor resolution of the IMX500.

| `IMX500.config`
| Returns a dictionary of the neural network configuration.

| `IMX500.convert_inference_coords(coords, metadata, picamera2)`
| Converts the coordinates _coords_ from the input tensor coordinate space to the final ISP output image space. Must be passed Picamera2's image metadata for the image, and the Picamera2 object. There are a number of scaling/cropping/translation operations occurring from the original sensor image to the fully processed ISP output image. This function converts coordinates provided by the output tensor to the equivalent coordinates after performing these operations.

| `IMX500.show_network_fw_progress_bar()`
| Displays a progress bar on the console showing the progress of the neural network firmware upload to the IMX500.

| `IMX500.get_roi_scaled(request)`
| Returns the region of interest (ROI) in the ISP output image coordinate space.

| `IMX500.get_isp_output_size(picamera2)`
| Returns the ISP output image size.

| `IMX500.get_input_size()`
| Returns the input tensor size based on the neural network model used.

| `IMX500.get_outputs(metadata)`
| Returns the output tensors from the Picamera2 image metadata.

| `IMX500.get_output_shapes(metadata)`
| Returns the shape of the output tensors from the Picamera2 image metadata for the neural network model used.

| `IMX500.set_inference_roi_abs(rectangle)`
| Sets the region of interest (ROI) crop rectangle which determines which part of the sensor image is converted to the input tensor that is used for inferencing on the IMX500. The region of interest should be specified in units of pixels at the full sensor resolution, as a `(x_offset, y_offset, width, height)` tuple.
| `IMX500.set_inference_aspect_ratio(aspect_ratio)`
| Automatically calculates the region of interest (ROI) crop rectangle on the sensor image to preserve the given aspect ratio. To make the ROI aspect ratio exactly match the input tensor for this network, use `imx500.set_inference_aspect_ratio(imx500.get_input_size())`.

| `IMX500.get_kpi_info(metadata)`
| Returns the frame-level performance indicators logged by the IMX500 for the given image metadata.

|===

---

# Source: getting-started.adoc

== Getting started

The instructions below describe how to run the pre-packaged MobileNet SSD and PoseNet neural network models on the Raspberry Pi AI Camera.

=== Hardware setup

Attach the camera to your Raspberry Pi 5 board following the instructions at xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[Install a Raspberry Pi Camera].

=== Prerequisites

These instructions assume you are using the AI Camera attached to either a Raspberry Pi 4 Model B or Raspberry Pi 5 board. With minor changes, you can follow these instructions on other Raspberry Pi models with a camera connector, including the Raspberry Pi Zero 2 W and Raspberry Pi 3 Model B+.

First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:

[source,console]
----
$ sudo apt update && sudo apt full-upgrade
----

=== Install the IMX500 firmware

The AI Camera must download runtime firmware onto the IMX500 sensor during startup.
To install these firmware files onto your Raspberry Pi, run the following command:

[source,console]
----
$ sudo apt install imx500-all
----

This command:

* installs the `/lib/firmware/imx500_loader.fpk` and `/lib/firmware/imx500_firmware.fpk` firmware files required to operate the IMX500 sensor
* places a number of neural network model firmware files in `/usr/share/imx500-models/`
* installs the IMX500 post-processing software stages in `rpicam-apps`
* installs the Sony network model packaging tools

NOTE: The IMX500 kernel device driver loads all the firmware files when the camera starts. This may take several minutes if the neural network model firmware has not been previously cached. The demos below display a progress bar on the console to indicate firmware loading progress.

=== Reboot

Now that you've installed the prerequisites, restart your Raspberry Pi:

[source,console]
----
$ sudo reboot
----

== Run example applications

Once all the system packages are updated and firmware files installed, we can start running some example applications. As mentioned earlier, the Raspberry Pi AI Camera integrates fully with `libcamera`, `rpicam-apps`, and `Picamera2`.

=== `rpicam-apps`

The xref:../computers/camera_software.adoc#rpicam-apps[`rpicam-apps` camera applications] include IMX500 object detection and pose estimation stages that can be run in the post-processing pipeline. For more information about the post-processing pipeline, see xref:../computers/camera_software.adoc#post-process-file[the post-processing documentation].

The examples on this page use post-processing JSON files located in `/usr/share/rpi-camera-assets/`.

==== Object detection

The MobileNet SSD neural network performs basic object detection, providing bounding boxes and confidence values for each object found. `imx500_mobilenet_ssd.json` contains the configuration parameters for the IMX500 object detection post-processing stage using the MobileNet SSD neural network.
`imx500_mobilenet_ssd.json` declares a post-processing pipeline that contains two stages:

. `imx500_object_detection`, which picks out bounding boxes and confidence values generated by the neural network in the output tensor
. `object_detect_draw_cv`, which draws bounding boxes and labels on the image

The MobileNet SSD tensor requires no significant post-processing on your Raspberry Pi to generate the final output of bounding boxes. All object detection runs directly on the AI Camera.

The following command runs `rpicam-hello` with object detection post-processing:

[source,console]
----
$ rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30
----

After running the command, you should see a viewfinder that overlays bounding boxes on objects recognised by the neural network:

image::images/imx500-mobilenet.jpg[IMX500 MobileNet]

To record video with object detection overlays, use `rpicam-vid` instead:

[source,console]
----
$ rpicam-vid -t 10s -o output.264 --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --width 1920 --height 1080 --framerate 30
----

You can configure the `imx500_object_detection` stage in many ways. For example, `max_detections` defines the maximum number of objects that the pipeline will detect at any given time. `threshold` defines the minimum confidence value required for the pipeline to consider any input as an object. The raw inference output data of this network can be quite noisy, so this stage also performs some temporal filtering and applies hysteresis. To disable this filtering, remove the `temporal_filter` config block.

==== Pose estimation

The PoseNet neural network performs pose estimation, labelling key points on the body associated with joints and limbs. `imx500_posenet.json` contains the configuration parameters for the IMX500 pose estimation post-processing stage using the PoseNet neural network.
`imx500_posenet.json` declares a post-processing pipeline that contains two stages:

* `imx500_posenet`, which fetches the raw output tensor from the PoseNet neural network
* `plot_pose_cv`, which draws line overlays on the image

The AI Camera performs basic detection, but the output tensor requires additional post-processing on your host Raspberry Pi to produce final output.

The following command runs `rpicam-hello` with pose estimation post-processing:

[source,console]
----
$ rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30
----

image::images/imx500-posenet.jpg[IMX500 PoseNet]

You can configure the `imx500_posenet` stage in many ways. For example, `max_detections` defines the maximum number of bodies that the pipeline will detect at any given time. `threshold` defines the minimum confidence value required for the pipeline to consider input as a body.

=== Picamera2

For examples of image classification, object detection, object segmentation, and pose estimation using Picamera2, see https://github.com/raspberrypi/picamera2/blob/main/examples/imx500/[the `picamera2` GitHub repository].

Most of the examples use OpenCV for some additional processing. To install the dependencies required to run OpenCV, run the following command:

[source,console]
----
$ sudo apt install python3-opencv python3-munkres
----

Now download https://github.com/raspberrypi/picamera2[the `picamera2` repository] to your Raspberry Pi to run the examples. You'll find example files in the root directory, with additional information in the `README.md` file.
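Much of that additional processing is plain array manipulation. For example, overlapping candidate boxes from a detector can be pruned with greedy non-maximum suppression; the sketch below is illustrative only and is not code from the examples:

[source,python]
----
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    iw = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(detections, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it.
    detections: list of ((x, y, w, h), score) tuples."""
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, score))
    return kept
----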
Run the following script from the repository to run object detection:

[source,console]
----
$ python imx500_object_detection_demo.py --model /usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk
----

To try pose estimation in Picamera2, run the following script from the repository:

[source,console]
----
$ python imx500_pose_estimation_higherhrnet_demo.py
----

---

# Source: model-conversion.adoc

== Model deployment

To deploy a new neural network model to the Raspberry Pi AI Camera, complete the following steps:

. Provide a floating-point neural network model (PyTorch or TensorFlow).
. Run the model through Edge-MDT (Edge AI Model Development Toolkit):
.. *Quantise* and compress the model so that it can run using the resources available on the IMX500 camera module.
.. *Convert* the compressed model to IMX500 format.
. Package the model into a firmware file that can be loaded at runtime onto the camera.

The first two steps will normally be performed on a more powerful computer such as a desktop or server. You must run the final packaging step on a Raspberry Pi.

=== Model creation

The creation of neural network models is beyond the scope of this guide. Existing models can be re-used, or new ones created using popular AI frameworks like TensorFlow or PyTorch.

For more information, see the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera[AITRIOS developer website].

=== Model compression and conversion

==== Edge-MDT installation

The Edge-MDT (Edge AI Model Development Toolkit) software package installs all the tools required to quantise, compress, and convert models to run on your IMX500 device. The Edge-MDT package takes a parameter to select between installing the PyTorch or TensorFlow version of the tools.
[tabs]
======
PyTorch::
+
[source,console]
----
$ pip install edge-mdt[pt]
----

TensorFlow::
+
[source,console]
----
$ pip install edge-mdt[tf]
----
+
TIP: Always use the same version of TensorFlow you used to compress your model.
======

If you need to install both packages, use two separate Python virtual environments. This prevents TensorFlow and PyTorch from causing conflicts with each other.

==== Model optimisation

Models are quantised and compressed using Sony's Model Compression Toolkit (MCT). This tool is automatically installed as part of the Edge-MDT installation step. For more information, see the https://github.com/sony/model_optimization[Sony model optimization GitHub repository].

The Model Compression Toolkit generates a quantised model in the following formats:

* Keras (TensorFlow)
* ONNX (PyTorch)

=== Conversion

The converter is a command line application that compiles the quantised model (in `.onnx` or `.keras` format) into a binary file that can be packaged and loaded onto the AI Camera. This tool is automatically installed as part of the Edge-MDT installation step.

To convert a model:

[tabs]
======
PyTorch::
+
[source,console]
----
$ imxconv-pt -i <quantised ONNX model> -o <output folder>
----

TensorFlow::
+
[source,console]
----
$ imxconv-tf -i <quantised Keras model> -o <output folder>
----
======

IMPORTANT: For optimal use of the memory available to the accelerator on the IMX500 sensor, add `--no-input-persistency` to the above commands. However, this will disable input tensor generation, which may be used for debugging purposes.

Both commands create an output folder that contains a memory usage report and a `packerOut.zip` file.

For more information on the model conversion process, see the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-converter[Sony IMX500 Converter documentation].

=== Packaging

IMPORTANT: You must run this step on a Raspberry Pi.

The final step packages the model into an RPK file.
When running the neural network model, we'll upload this file to the AI Camera. Before proceeding, run the following command to install the necessary tools:

[source,console]
----
$ sudo apt install imx500-tools
----

To package the model into an RPK file, run the following command:

[source,console]
----
$ imx500-package -i <path to packerOut.zip> -o <output folder>
----

This command should create a file named `network.rpk` in the output folder. You'll pass the name of this file to your IMX500 camera applications.

For a more comprehensive set of instructions and further specifics on the tools used, see the https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-packager[Sony IMX500 Packager documentation].

---

# Source: ai-camera.adoc

include::ai-camera/about.adoc[]

include::ai-camera/getting-started.adoc[]

include::ai-camera/details.adoc[]

include::ai-camera/model-conversion.adoc[]

---

# Source: about.adoc

[[ai-hat-plus]]
== About

.The 26 tera-operations per second (TOPS) Raspberry Pi AI HAT+
image::images/ai-hat-plus-hero.jpg[width="80%"]

The Raspberry Pi AI HAT+ add-on board has a built-in Hailo AI accelerator compatible with Raspberry Pi 5. The NPU in the AI HAT+ can be used for applications including process control, security, home automation, and robotics.

The AI HAT+ is available in 13 and 26 tera-operations per second (TOPS) variants, built around the Hailo-8L and Hailo-8 neural network inference accelerators respectively. The 13 TOPS variant works best with moderate workloads, with performance similar to the xref:ai-kit.adoc[AI Kit]. The 26 TOPS variant can run larger networks, can run networks faster, and can more effectively run multiple networks simultaneously.

The AI HAT+ communicates using Raspberry Pi 5's PCIe interface.
The host Raspberry Pi 5 automatically detects the on-board Hailo accelerator and uses the NPU for supported AI computing tasks. Raspberry Pi OS's built-in `rpicam-apps` camera applications automatically use the NPU to run compatible post-processing tasks.

[[ai-hat-plus-installation]]
== Install

To use the AI HAT+, you will need:

* a Raspberry Pi 5

Each AI HAT+ comes with a ribbon cable, GPIO stacking header, and mounting hardware. Complete the following instructions to install your AI HAT+:

. First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:
+
[source,console]
----
$ sudo apt update && sudo apt full-upgrade
----

. Next, xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[ensure that your Raspberry Pi firmware is up-to-date]. Run the following command to see what firmware you're running:
+
[source,console]
----
$ sudo rpi-eeprom-update
----
+
If you see 6 December 2023 or a later date, proceed to the next step. If you see a date earlier than 6 December 2023, run the following command to open the Raspberry Pi Configuration CLI:
+
[source,console]
----
$ sudo raspi-config
----
+
Under `Advanced Options` > `Bootloader Version`, choose `Latest`. Then, exit `raspi-config` with `Finish` or the *Escape* key.
+
Run the following command to update your firmware to the latest version:
+
[source,console]
----
$ sudo rpi-eeprom-update -a
----
+
Then, reboot with `sudo reboot`.

. Disconnect the Raspberry Pi from power before beginning installation.

. For the best performance, we recommend using the AI HAT+ with the Raspberry Pi Active Cooler. If you have an Active Cooler, install it before installing the AI HAT+.
+
--
image::images/ai-hat-plus-installation-01.png[width="60%"]
--

. Install the spacers using four of the provided screws. Firmly press the GPIO stacking header on top of the Raspberry Pi GPIO pins; orientation does not matter as long as all pins fit into place.
Disconnect the ribbon cable from the AI HAT+, and insert the other end into the PCIe port of your Raspberry Pi. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing inward, towards the USB ports. With the ribbon cable fully and evenly inserted into the PCIe port, push the cable holder down from both sides to secure the ribbon cable firmly in place.
+
--
image::images/ai-hat-plus-installation-02.png[width="60%"]
--

. Set the AI HAT+ on top of the spacers, and use the four remaining screws to secure it in place.

. Insert the ribbon cable into the slot on the AI HAT+. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing up. With the ribbon cable fully and evenly inserted into the port, push the cable holder down from both sides to secure the ribbon cable firmly in place.

. Congratulations, you have successfully installed the AI HAT+. Connect your Raspberry Pi to power; Raspberry Pi OS will automatically detect the AI HAT+.

== Get started with AI on your Raspberry Pi

To start running AI accelerated applications on your Raspberry Pi, check out our xref:../computers/ai.adoc[Getting Started with the AI Kit and AI HAT+] guide.

---

# Source: ai-hat-plus.adoc

include::ai-hat-plus/about.adoc[]

== Product brief

For more information about the AI HAT+, including mechanical specifications and operating environment limitations, see the https://datasheets.raspberrypi.com/ai-hat-plus/raspberry-pi-ai-hat-plus-product-brief.pdf[product brief].

---

# Source: about.adoc

[[ai-kit]]
== About

.The Raspberry Pi AI Kit
image::images/ai-kit.jpg[width="80%"]

The Raspberry Pi AI Kit bundles the xref:m2-hat-plus.adoc#m2-hat-plus[Raspberry Pi M.2 HAT+] with a Hailo AI acceleration module for use with Raspberry Pi 5.
The kit contains the following: * Hailo AI module containing a Neural Processing Unit (NPU) * Raspberry Pi M.2 HAT+, to connect the AI module to your Raspberry Pi 5 * thermal pad pre-fitted between the module and the M.2 HAT+ * mounting hardware kit * 16 mm stacking GPIO header == AI module features * 13 tera-operations per second (TOPS) neural network inference accelerator built around the Hailo-8L chip. * M.2 2242 form factor [[ai-kit-installation]] == Install To use the AI Kit, you will need: * a Raspberry Pi 5 Each AI Kit comes with a pre-installed AI module, ribbon cable, GPIO stacking header, and mounting hardware. Complete the following instructions to install your AI Kit: . First, ensure that your Raspberry Pi runs the latest software. Run the following command to update: + [source,console] ---- $ sudo apt update && sudo apt full-upgrade ---- . Next, xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[ensure that your Raspberry Pi firmware is up-to-date]. Run the following command to see what firmware you're running: + [source,console] ---- $ sudo rpi-eeprom-update ---- + If you see 6 December 2023 or a later date, proceed to the next step. If you see a date earlier than 6 December 2023, run the following command to open the Raspberry Pi Configuration CLI: + [source,console] ---- $ sudo raspi-config ---- + Under `Advanced Options` > `Bootloader Version`, choose `Latest`. Then, exit `raspi-config` with `Finish` or the *Escape* key. + Run the following command to update your firmware to the latest version: + [source,console] ---- $ sudo rpi-eeprom-update -a ---- + Then, reboot with `sudo reboot`. . Disconnect the Raspberry Pi from power before beginning installation. . For the best performance, we recommend using the AI Kit with the Raspberry Pi Active Cooler. If you have an Active Cooler, install it before installing the AI Kit. + -- image::images/ai-kit-installation-01.png[width="60%"] -- . 
Install the spacers using four of the provided screws. Firmly press the GPIO stacking header on top of the Raspberry Pi GPIO pins; orientation does not matter as long as all pins fit into place. Disconnect the ribbon cable from the AI Kit, and insert the other end into the PCIe port of your Raspberry Pi. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing inward, towards the USB ports. With the ribbon cable fully and evenly inserted into the PCIe port, push the cable holder down from both sides to secure the ribbon cable firmly in place. + -- image::images/ai-kit-installation-02.png[width="60%"] -- . Set the AI Kit on top of the spacers, and use the four remaining screws to secure it in place. + -- image::images/ai-kit-installation-03.png[width="60%"] -- . Insert the ribbon cable into the slot on the AI Kit. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing up. With the ribbon cable fully and evenly inserted into the port, push the cable holder down from both sides to secure the ribbon cable firmly in place. + -- image::images/ai-kit-installation-04.png[width="60%"] -- . Congratulations, you have successfully installed the AI Kit. Connect your Raspberry Pi to power; Raspberry Pi OS will automatically detect the AI Kit. + -- image::images/ai-kit-installation-05.png[width="60%"] -- WARNING: Always disconnect your Raspberry Pi from power before connecting or disconnecting a device from the M.2 slot. == Get started with AI on your Raspberry Pi To start running AI accelerated applications on your Raspberry Pi, check out our xref:../computers/ai.adoc[Getting Started with the AI Kit and AI HAT+] guide. 
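The bootloader-date check in the installation steps above can be scripted. The sketch below is illustrative only: it assumes the human-readable `6 December 2023` date format shown in the steps, which is not a documented interface of `rpi-eeprom-update`:

```python
from datetime import date

# Minimum bootloader release date required, per the installation steps above.
MIN_DATE = date(2023, 12, 6)

# Month-name lookup for dates printed in '6 December 2023' form.
MONTHS = {name: num for num, name in enumerate(
    ["January", "February", "March", "April", "May", "June",
     "July", "August", "September", "October", "November", "December"],
    start=1)}

def firmware_is_recent(date_text: str) -> bool:
    """Return True if a '6 December 2023'-style date is on or after MIN_DATE."""
    day, month, year = date_text.split()
    return date(int(year), MONTHS[month], int(day)) >= MIN_DATE
```

If the check returns `False`, follow the `raspi-config` steps above to select the latest bootloader and run `sudo rpi-eeprom-update -a`.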
--- # Source: ai-kit.adoc *Note: This file could not be automatically converted from AsciiDoc.* include::ai-kit/about.adoc[] == Product brief For more information about the AI Kit, including mechanical specifications and operating environment limitations, see the https://datasheets.raspberrypi.com/ai-kit/raspberry-pi-ai-kit-product-brief.pdf[product brief]. --- # Source: codec_zero.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Raspberry Pi Codec Zero Raspberry Pi Codec Zero is a Raspberry Pi Zero-sized audio HAT. It delivers bi-directional digital audio signals (I2S) between a Raspberry Pi and the Codec Zero's on-board Dialog Semiconductor DA7212 codec. The Codec Zero supports a range of input and output devices. * High performance 24-bit audio codec * Supports common audio sample rates between 8-96 kHz * Built in micro-electro-mechanical (MEMS) microphone (Mic2) * Mono electret microphone (Mic2 left) * Automatic MEMS disabling on Mic2 insert detect * Supports additional (no fit) mono electret microphone (Mic1 right) * Stereo auxiliary input channel (AUX IN) - PHONO/RCA connectors * Stereo auxiliary output channel (Headphone/AUX OUT) * Flexible analogue and digital mixing paths * Digital signal processors (DSP) for automatic level control (ALC) * Five-band EQ * Mono line-out/mini speaker driver: 1.2 W at 5 V, THD < 10%, R = 8 Ω image::images/Codec_Zero_Board_Diagram.jpg[width="80%"] The Codec Zero includes an EEPROM which can be used for auto-configuration of the Linux environment if necessary. It has an integrated MEMS microphone, and can be used with stereo microphone input via a 3.5 mm socket and a mono speaker (1.2W/8Ω). In addition to the green (GPIO23) and red (GPIO24) LEDs, a tactile programmable button (GPIO27) is also provided. ==== Pinouts [cols="1,12"] |=== | *P1/2* | Support external PHONO/RCA sockets if needed. P1: AUX IN, P2: AUX OUT. | *P1* | Pin 1 is square. 
|===

image::images/CODEC_ZERO_ZOOMED_IN_DIAGRAM.jpg[width="50%"]

Codec Zero is an ideal design starting point for small-scale projects such as walkie-talkies, smart doorbells, vintage radio hacks, or smart speakers.

---

# Source: configuration.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Configuration

A pre-programmed EEPROM is included on all Raspberry Pi audio boards. Raspberry Pi audio boards are designed to be plug-and-play; Raspberry Pi OS automatically detects and configures the board. In Raspberry Pi OS, right-clicking on the audio settings in the top right-hand corner of your screen allows you to switch between the on-board audio settings and the HAT audio settings:

image::images/gui.png[]

There are a number of third-party audio software applications available for Raspberry Pi that support the plug-and-play feature of our audio boards. Often these are used headless; they can be controlled via a PC or Mac application, or by a web server installed on the Raspberry Pi, with interaction through a webpage.

If you need to configure Raspberry Pi OS yourself, perhaps because you're running a headless system and don't have the option of control via the GUI, you will need to make your Raspberry Pi audio board the primary audio device in Raspberry Pi OS, disabling the Raspberry Pi's on-board audio device. This is done by editing the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file. Using a Terminal session connected to your Raspberry Pi via SSH, run the following command to edit the file:

[source,console]
----
$ sudo nano /boot/firmware/config.txt
----

Find the `dtparam=audio=on` line in the file and comment it out by placing a # symbol at the start of the line. Anything after the # symbol on a line is ignored when the file is read.
Your `/boot/firmware/config.txt` file should now contain the following entry: [source,ini] ---- #dtparam=audio=on ---- Press `Ctrl+X`, then the `Y` key, then *Enter* to save. Finally, reboot your Raspberry Pi in order for the settings to take effect. [source,console] ---- $ sudo reboot ---- Alternatively, the `/boot/firmware/config.txt` file can be edited directly onto the Raspberry Pi's microSD card, inserted into your usual computer. Using the default file manager, open the `/boot/firmware/` volume on the card and edit the `config.txt` file using an appropriate text editor, then save the file, eject the microSD card and reinsert it back into your Raspberry Pi. === Attach the HAT The Raspberry Pi audio boards attach to the Raspberry Pi's 40-pin header. They are designed to be supported on the Raspberry Pi using the supplied circuit board standoffs and screws. No soldering is required on the Raspberry Pi audio boards for normal operation unless you are using hardwired connections for specific connectors such as XLR (External Line Return) connections on the DAC Pro. All the necessary mounting hardware including spacers, screws and connectors is provided. The PCB spacers should be screwed, finger-tight only, to the Raspberry Pi before adding the audio board. The remaining screws should then be screwed into the spacers from above. === Hardware versions There are multiple versions of the audio cards. Your specific version determines the actions required for configuration. Older, IQaudIO-branded boards have a black PCB. Newer Raspberry Pi-branded boards have a green PCB. These boards are electrically equivalent, but have different EEPROM contents. After attaching the HAT and applying power, check that the power LED on your audio card is illuminated, if it has one. For example, the Codec Zero has an LED marked `PWR`. After establishing the card has power, use the following command to check the version of your board: [source,console] ---- $ grep -a . 
/proc/device-tree/hat/*
----

If the vendor string says "Raspberry Pi Ltd." then no further action is needed (but see below for the extra Codec Zero configuration). If it says "IQaudIO Limited www.iqaudio.com" then you will need the additional config.txt settings outlined below. If it says "No such file or directory" then the HAT is not being detected, but these config.txt settings may still make it work.

[source,ini]
----
# Some magic to prevent the normal HAT overlay from being loaded
dtoverlay=
# And then choose one of the following, according to the model:
dtoverlay=rpi-codeczero
dtoverlay=rpi-dacplus
dtoverlay=rpi-dacpro
dtoverlay=rpi-digiampplus
----

=== Extra Codec Zero configuration

The Raspberry Pi Codec Zero board uses the Dialog Semiconductor DA7212 codec. This allows the recording of audio from the built-in MEMS microphone, from stereo phono sockets (AUX IN) or two mono external electret microphones. Playback is through stereo phono sockets (AUX OUT) or a mono speaker connector. Each input and output device has its own mixer, allowing the audio levels and volume to be adjusted independently. Within the codec itself, other mixers and switches exist to allow the output to be mixed to a single mono channel for single-speaker output. Signals may also be inverted, and there is a five-band equaliser to adjust certain frequency bands. These settings can be controlled interactively, using AlsaMixer, or programmatically.

Both the AUX IN and AUX OUT are 1V RMS. It may be necessary to adjust the AUX IN's mixer to ensure that the input signal doesn't saturate the ADC. Similarly, the output mixers may need to be adjusted to get the best possible output.
Preconfigured scripts (loadable ALSA settings) https://github.com/raspberrypi/Pi-Codec[are available on GitHub], offering:

* Mono MEMS mic recording, mono speaker playback
* Mono MEMS mic recording, mono AUX OUT playback
* Stereo AUX IN recording, stereo AUX OUT playback
* Stereo MIC1/MIC2 recording, stereo AUX OUT playback

The Codec Zero needs to know which of these input and output settings are being used each time the Raspberry Pi powers on. Using a Terminal session on your Raspberry Pi, run the following command to download the scripts:

[source,console]
----
$ git clone https://github.com/raspberrypi/Pi-Codec.git
----

If git is not installed, run the following command to install it:

[source,console]
----
$ sudo apt install git
----

The following command will set your device to use the on-board MEMS microphone and output for speaker playback:

[source,console]
----
$ sudo alsactl restore -f /home//Pi-Codec/Codec_Zero_OnboardMIC_record_and_SPK_playback.state
----

This command may produce warning messages, including the following:

* "failed to import hw"
* "No state is present for card"

In most cases these warnings are harmless, and you can safely ignore them. However, a "Remote I/O error" (`REMOTEIO`) means that the kernel cannot communicate with an I2C device, and may indicate a hardware failure.

In order for your project to operate with your required settings when it is powered on, edit the `/etc/rc.local` file. The contents of this file are run at the end of every boot process, so it is ideal for this purpose. Edit the file:

[source,console]
----
$ sudo nano /etc/rc.local
----

Add the chosen script command above the `exit 0` line, then press `Ctrl+X`, then `Y`, then *Enter* to save. The file should now look similar to this, depending on your chosen setting:

[source,bash]
----
#!/bin/sh
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

sudo alsactl restore -f /home//Pi-Codec/Codec_Zero_OnboardMIC_record_and_SPK_playback.state

exit 0
----

Reboot for the settings to take effect:

[source,console]
----
$ sudo reboot
----

If you are using your Raspberry Pi and Codec Zero in a headless environment, there is one final step required to make the Codec Zero the default audio device without access to the GUI audio settings on the desktop. We need to create a small file in your home folder:

[source,console]
----
$ sudo nano .asoundrc
----

Add the following to the file:

----
pcm.!default {
	type hw
	card Zero
}
----

Press `Ctrl+X`, then the `Y` key, then *Enter* to save. Reboot once more to complete the configuration:

[source,console]
----
$ sudo reboot
----

Modern Linux distributions such as Raspberry Pi OS typically use PulseAudio or PipeWire for audio control. These frameworks can mix and switch audio from multiple sources, and they provide a high-level API for audio applications to use; many audio apps use these frameworks by default. Only create `~/.asoundrc` if an audio application needs to:

* communicate directly with ALSA
* run in an environment where PulseAudio or PipeWire are not present

This file can interfere with the UI's view of underlying audio resources. As a result, we do not recommend creating `~/.asoundrc` when running the Raspberry Pi OS desktop. The UI may automatically clean up and remove this file if it exists.

=== Mute and unmute the DigiAMP{plus}

The DigiAMP{plus} mute state is toggled by GPIO22 on the Raspberry Pi. The latest audio device tree supports unmuting the DigiAMP{plus} through additional parameters. Firstly, a "one-shot" unmute when the kernel module loads.
For Raspberry Pi boards: [source,ini] ---- dtoverlay=rpi-digiampplus,unmute_amp ---- For IQaudIO boards: [source,ini] ---- dtoverlay=iqaudio-digiampplus,unmute_amp ---- Unmute the amp when an ALSA device is opened by a client. Mute, with a five-second delay when the ALSA device is closed. (Reopening the device within the five-second close window will cancel mute.) For Raspberry Pi boards: [source,ini] ---- dtoverlay=rpi-digiampplus,auto_mute_amp ---- For IQaudIO boards: [source,ini] ---- dtoverlay=iqaudio-digiampplus,auto_mute_amp ---- If you do not want to control the mute state through the device tree, you can also script your own solution. The amp will start up muted. To unmute the amp: [source,console] ---- $ sudo sh -c "echo 22 > /sys/class/gpio/export" $ sudo sh -c "echo out >/sys/class/gpio/gpio22/direction" $ sudo sh -c "echo 1 >/sys/class/gpio/gpio22/value" ---- To mute the amp once more: [source,console] ---- $ sudo sh -c "echo 0 >/sys/class/gpio/gpio22/value" ---- --- # Source: dac_plus.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Raspberry Pi DAC{plus} Raspberry Pi DAC{plus} is a high-resolution audio output HAT that provides 24-bit 192 kHz digital audio output. image::images/DAC+_Board_Diagram.jpg[width="80%"] A Texas Instruments PCM5122 is used in the DAC{plus} to deliver analogue audio to the phono connectors of the device. It also supports a dedicated headphone amplifier and is powered via the Raspberry Pi through the GPIO header. ==== Pinouts [cols="1,12"] |=== | *P1* | Analogue out (0-2V RMS), carries GPIO27, MUTE signal (headphone detect), left and right audio and left and right ground. | *P6* | Headphone socket signals (pin1: LEFT, 2:GROUND, 3: RIGHT, 4:GROUND, 5:DETECT). |=== --- # Source: dac_pro.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Raspberry Pi DAC Pro The Raspberry Pi DAC Pro HAT is our highest-fidelity digital to analogue converter (DAC). 
image::images/DAC_Pro_Board_Diagram.jpg[width="80%"]

With the Texas Instruments PCM5242, the DAC Pro provides an outstanding signal-to-noise ratio (SNR) and supports balanced/differential output in parallel with phono/RCA line-level output. It also includes a dedicated headphone amplifier. The DAC Pro is powered by a Raspberry Pi through the GPIO header.

As part of the DAC Pro, two three-pin headers (P7/P9) are exposed above the Raspberry Pi's USB and Ethernet ports for use by the optional XLR board, allowing differential/balanced output.

==== Pinouts

[cols="1,12"]
|===
| *P1* | Analogue out (0-2V RMS), carries GPIO27, MUTE signal (headphone detect), left and right audio and left and right ground.
| *P6* | Headphone socket signals (1: LEFT, 2: GROUND, 3: RIGHT, 4: GROUND, 5: DETECT).
| *P7/9* | Differential (0-4V RMS) output (P7: LEFT, P9: RIGHT).
| *P10* | Alternative 5V input, powering Raspberry Pi in parallel.
|===

==== Optional XLR Board

The DAC Pro exposes two three-pin headers (P7/P9) used by the optional XLR board to provide differential/balanced output on XLR sockets above the Raspberry Pi's USB and Ethernet ports.

image::images/optional_xlr_board.jpg[width="80%"]

An XLR connector is used in studio equipment and some high-end hi-fi systems. It can also be used to drive active monitor speakers, such as those used at clubs or on stage.

---

# Source: digiamp_plus.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Raspberry Pi DigiAMP{plus}

With Raspberry Pi DigiAMP{plus}, you can connect two passive stereo speakers of up to 35 W with variable output, making it ideal for use in Raspberry Pi-based hi-fi systems.

DigiAMP{plus} uses the Texas Instruments TAS5756M PowerDAC and must be powered from an external supply. It requires a 12-24V DC power source (the XP Power VEC65US19 power supply is recommended).

image::images/DigiAMP+_Board_Diagram.jpg[width="80%"]

DigiAMP{plus}'s power input barrel connector is 5.5 mm × 2.5 mm.
At power-on, the amplifier is muted by default (the mute LED is illuminated). Software is responsible for the mute state and LED control (Raspberry Pi GPIO22).

DigiAMP{plus} is designed to power the Raspberry Pi and DigiAMP{plus} together in parallel, delivering 5.1V at 2.5A to the Raspberry Pi through the GPIO header.

WARNING: Do not apply power to the Raspberry Pi's own power input when using DigiAMP{plus}.

==== Pinouts

[cols="1,12"]
|===
| *P5* | Alternative power input for hard-wired installations (polarity must be observed).
| *P8* | TAS5756M internal GPIO1/2/3.
|===

---

# Source: getting_started.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Getting started

=== Create a toy chatter box

As an example of what Raspberry Pi Audio Boards can do, let's walk through the creation of a toy chatter box. Its on-board microphone, programmable button and speaker driver make the Codec Zero an ideal choice for this application.

image::images/Chatter_Box.jpg[width="80%"]

A random pre-recorded five-second audio clip will be played when the button is pressed. After holding the button for ten seconds, a notifying burp sound will be emitted, after which a new five-second clip will be recorded. Holding the button down for more than 20 seconds will play a second burp sound, and then erase all previous recordings.

=== Hardware and wiring

For this project, any small passive speaker should be sufficient. We're using one available https://shop.pimoroni.com/products/3-speaker-4-3w?variant=380549926[here], which handles 5 W of power at 4 Ω. We have also used an illuminated momentary push button and a laser-cut box to house all the components, but both are entirely optional. This example will work just using the Codec Zero's on-board button, which is pre-wired to GPIO 27. (Alternatively, you can use any momentary push button, such as those available https://shop.pimoroni.com/products/mini-arcade-buttons?variant=40377171274[here].)
image::images/Chatterbox_Labels.png[width="80%"]

Use a small flat-head screwdriver to attach your speaker to the screw terminals. For the additional push button, solder the button wires directly to the Codec Zero pads as indicated, using GPIO pin 27 and Ground for the switch, and +3.3V and Ground for the LED, if necessary.

=== Set up your Raspberry Pi

In this example, we are using Raspberry Pi OS Lite. Refer to our guide on xref:../computers/getting-started.adoc#installing-the-operating-system[installing Raspberry Pi OS] for more details. Make sure that you update your operating system before proceeding, and follow the instructions provided for Codec Zero configuration, including the commands to enable the on-board microphone and speaker output.

=== Program your Raspberry Pi

Open a shell on your Raspberry Pi (for instance, by connecting via SSH) and run the following command to create our Python script:

[source,console]
----
$ sudo nano chatter_box.py
----

Add the following to the file, replacing `` with your username:

[source,python]
----
#!/usr/bin/env python3
from gpiozero import Button
from signal import pause
import time
import random
import os
from datetime import datetime

# Print current date
date = datetime.now().strftime("%d_%m_%Y-%H:%M:%S")
print(f"{date}")

# Make sure that the 'sounds' folder exists, and if it does not, create it
path = '/home//sounds'
if not os.path.exists(path):
    os.makedirs(path)
    print("The new directory is created!")
    os.system('chmod 777 -R /home//sounds')

# Download a 'burp' sound if it does not already exist
burp = '/home//burp.wav'
if not os.path.exists(burp):
    os.system('wget http://rpf.io/burp -O ' + burp)
    print("Burp sound downloaded!")

# Set up button functions - pin 27, button hold time of ten seconds
button = Button(27, hold_time=10)

def pressed():
    global press_time
    press_time = time.time()
    print("Pressed at %s" % (press_time))

def released():
    release_time = time.time()
    pressed_for = release_time - press_time
    print("Released at %s after %.2f seconds" % (release_time, pressed_for))
    if pressed_for < button.hold_time:
        print("This is a short press")
        randomfile = random.choice(os.listdir("/home//sounds/"))
        file = '/home//sounds/' + randomfile
        os.system('aplay ' + file)
    elif pressed_for > 20:
        os.system('aplay ' + burp)
        print("Erasing all recorded sounds")
        os.system('rm /home//sounds/*')

def held():
    print("This is a long press")
    os.system('aplay ' + burp)
    os.system('arecord --format S16_LE --duration=5 --rate 48000 -c2 /home//sounds/$(date +"%d_%m_%Y-%H_%M_%S")_voice.wav')

button.when_pressed = pressed
button.when_released = released
button.when_held = held

pause()
----

Press `Ctrl+X`, then the `Y` key, then *Enter* to save. To make the script executable, type the following:

[source,console]
----
$ sudo chmod +x chatter_box.py
----

Next, create a crontab entry so that the script runs automatically each time the device is powered on. Run the following command to open your crontab for editing:

[source,console]
----
$ crontab -e
----

You will be asked to select an editor; we recommend `nano`. Select it by entering the corresponding number, and press *Enter* to continue. Add the following line to the bottom of the file, replacing `` with your username:

----
@reboot python /home//chatter_box.py
----

Press `Ctrl+X`, then the `Y` key, then *Enter* to save, then reboot your device with `sudo reboot`.

=== Use the toy chatter box

The final step is to ensure that everything is operating as expected. Press and hold the button, releasing it when you hear the burp; a five-second recording will then begin. Once recording has finished, press the button briefly to hear your recording.
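The timing thresholds in the script can be factored into a small pure function, which makes the behaviour easy to check without hardware. This is a sketch using the same ten-second hold and 20-second erase thresholds; the function name is illustrative, and in the real script the ten-to-20-second case is handled by the `held` callback rather than on release:

```python
def classify_press(pressed_for: float, hold_time: float = 10.0) -> str:
    """Decide what a button press should do, given how long it was held."""
    if pressed_for < hold_time:
        return "play a random clip"
    if pressed_for > 20:
        return "erase all recordings"
    return "record a new clip"  # the burp plus five-second recording case
```

For example, `classify_press(2)` maps to playback, while holding past 20 seconds maps to erasing all recordings.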
Repeat this process as many times as you wish, and your sounds will be played at random. You can delete all recordings by pressing and holding the button, keeping the button pressed during the first burp and recording process, and releasing it after at least 20 seconds, at which point you will hear another burp sound confirming that the recordings have been deleted. video::BjXERzu8nS0[youtube,width=80%,height=400px] === Next steps Upgrades! It is always fun to upgrade a project, so why not add some additional features, such as an LED that will illuminate when recording? This project has all the parts required to make your own version of a https://aiyprojects.withgoogle.com/[Google intelligent speaker system], or you may want to consider building a second device that can be used to create a pair of walkie-talkies that are capable of transferring audio files over a network via SSH. --- # Source: hardware-info.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Hardware information Hardware information: * PCB screws are all M2.5. * PCB standoffs (for case) are 5 mm male/female. * PCB standoffs (for Raspberry Pi to audio boards) are 9 mm female/female. * PCB standoffs (for XLR to DAC PRO) are 8 mm female/male. * PCB standoffs (for the official Raspberry Pi 7-inch display) are 5 mm male/female. * The rotary encoders we have used and tested are the Alpha three-pin rotary encoder RE160F-40E3-20A-24P, the ALPS EC12E2430804 (RS: 729-5848), and the Bourns ECW0JB24-AC0006L (RS: 263-2839). * The barrel connector used for powering the DigiAMP{plus} is 2.5 mm ID, 5.5 mm OD, 11 mm. * The DigiAMP{plus} is designed to operate with a 12V to 24V, 3A supply such as the XPPower VEC65US19 or similar. * The DigiAMP{plus} uses CamdenBoss two-part connectors. Those fitted to the PCB are CTBP9350/2AO. * The speaker terminal used on the Codec Zero will accept wires of between 14~26 AWG (wire of max 1.6 mm in diameter). 
=== GPIO usage Raspberry Pi audio boards take advantage of a number of pins on the GPIO header in order to operate successfully. Some of these pins are solely for the use of the board, and some can be shared with other peripherals, sensors, etc. The following Raspberry Pi GPIO pins will be used by the audio boards: * All power pins * All ground pins * GPIO 2/3 (I2C) * GPIO 18/19/20/21 (I2S) If appropriate then the following are also used: * GPIO 22 (DigiAMP+ mute/unmute support) * GPIO 23/24 for rotary encoder (physical volume control) or status LED (Codec Zero) * GPIO 25 for the IR Sensor * GPIO 27 for the rotary encoder push switch/Codec Zero switch === DAC PRO, DAC{plus}, DigiAMP{plus}, Codec Zero image::images/all_audio_boards_gpio_pinouts.png[width="80%"] The DAC PRO, DAC{plus} and DigiAMP{plus} re-expose the Raspberry Pi signals, allowing additional sensors and peripherals to be added easily. Please note that some signals are for exclusive use (I2S and EEPROM) by some of our boards; others such as I2C can be shared across multiple boards. image::images/pin_out_new.png[width="80%"] === Saving AlsaMixer settings To store the AlsaMixer settings, add the following at the command line: [source,console] ---- $ sudo alsactl store ---- You can save the current state to a file, then reload that state at startup. To save, run the following command, replacing `` with your username: [source,console] ---- $ sudo alsactl store -f /home//usecase.state ---- To restore a saved file, run the following command, replacing `` with your username: [source,console] ---- $ sudo alsactl restore -f /home//usecase.state ---- === MPD-based audio with volume control To allow Music Player Daemon (MPD)-based audio software to control the audio board's built in volume, the file `/etc/mpd.conf` may need to be changed to support the correct AlsaMixer name. This can be achieved by ensuring the 'Audio output' section of `/etc/mpd.conf` has the 'mixer_control' line. 
Below is an example for the Texas Instruments-based boards (DAC PRO/DAC{plus}/DigiAMP{plus}): ---- audio_output { type "alsa" name "ALSA Device" mixer_control "Digital" } ---- --- # Source: introduction.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Overview Raspberry Pi Audio Boards bring high quality audio to your existing hi-fi or Raspberry Pi-based equipment and projects. We offer four different Hardware Attached on Top (HAT) options that will fit any Raspberry Pi using the 40-pin GPIO header. Each board has a specific purpose and set of features. The highest audio quality playback is available from our DAC PRO, DAC{plus} and DigiAMP{plus} boards, which support up to full HD audio (192 kHz); while the Codec Zero supports up to HD audio (96 kHz) and includes a built-in microphone, making it ideal for compact projects. === Features at a glance [cols="2,1,1,1,1,1,1,1,1,1"] |=== | | *Line out* | *Balanced out* | *Stereo speakers* | *Mono speaker* | *Headphones* | *Aux in* | *Aux out* | *Ext mic* | *Built-in mic* | DAC Pro ^| ✓ ^| ✓ | | ^| ✓ | | | | | DAC{plus} ^| ✓ | | | ^| ✓ | | | | | DigiAmp{plus} | | ^| ✓ | | | | | | | Codec Zero | | | ^| ✓ | ^| ✓ ^| ✓ ^| ✓ ^| ✓ |=== Line out:: A double phono/RCA connector, normally red and white in colour. This output is a variable analogue signal (0-2V RMS) and can connect to your existing hi-fi (pre-amp or amplifier), or can be used to drive active speakers which have their own amplifier built in. Balanced out:: An XLR connector, normally a three-pin male connector. This is used in a studio set-up, and in some high-end hi-fi systems. It can also be used to drive active monitor speakers like those used at clubs or on stage directed towards the DJ or performers. Stereo speakers:: Two sets of screw terminals for 2 × 25 W speakers. These are for traditional hi-fi speakers without built-in amplification. These are known as passive speakers. 
Mono speaker:: A screw terminal for a single 1.2 W speaker, as found in a transistor radio or similar. Headphones:: A 3.5 mm jack socket delivering stereo audio for a set of headphones. The headphone amplifiers on the Raspberry Pi DAC boards can drive up to 80/90 Ω impedance headphones. Aux in:: A double Phono/RCA connector or 3.5 mm socket. Accepts analogue audio in up to 1V RMS. This can be used to record audio from a variable analogue source such as a mobile phone, MP3 player or similar. Aux out:: A double Phono/RCA connector or 3.5 mm socket. Delivers analogue audio out up to 1V RMS. This can be used to feed audio into an amplifier at a reduced volume compared to Line out. Ext mic:: A 3.5 mm socket for use with an external electret microphone. The built-in MEMS microphone on the Codec Zero is automatically disabled when the external Mic in connector is used. --- # Source: update-firmware.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Updating your firmware Raspberry Pi Audio Boards use an EEPROM that contains information that is used by the host Raspberry Pi device to select the appropriate driver at boot time. This information is programmed into the EEPROM during manufacture. There are some circumstances where the end user may wish to update the EEPROM contents: this can be done from the command line. IMPORTANT: Before proceeding, update the version of Raspberry Pi OS running on your Raspberry Pi to the latest version. === The EEPROM write-protect link During the programming process you will need to connect the two pads shown in the red box with a wire to pull down the EEPROM write-protect link. image::images/write_protect_tabs.jpg[width="80%"] NOTE: In some cases the two pads may already have a 0 Ω resistor fitted to bridge the write-protect link, as illustrated in the picture of the Codec Zero board above. === Program the EEPROM Once the write-protect line has been pulled down, the EEPROM can be programmed. 
You should first install the utilities and then run the programmer. Open up a terminal window and type the following: [source,console] ---- $ sudo apt update $ sudo apt install rpi-audio-utils $ sudo rpi-audio-flash ---- After starting, you will see a warning screen. image::images/firmware-update/warning.png[] Select "Yes" to proceed. You should see a menu where you can select your hardware. image::images/firmware-update/select.png[] NOTE: If no HAT is present, or if the connected HAT is not a Raspberry Pi Audio board, you will be presented with an error screen. If the firmware has already been updated on the board, a message will be displayed informing you that you do not have to continue. After selecting the hardware, a screen will display while the new firmware is flashed to the HAT. image::images/firmware-update/flashing.png[] Afterwards a screen will display telling you that the new firmware has installed. image::images/firmware-update/flashed.png[] NOTE: If the firmware fails to install correctly, you will see an error screen. Try removing and reseating the HAT, then flash the firmware again. --- # Source: audio.adoc *Note: This file could not be automatically converted from AsciiDoc.* include::audio/introduction.adoc[] include::audio/dac_pro.adoc[] include::audio/dac_plus.adoc[] include::audio/digiamp_plus.adoc[] include::audio/codec_zero.adoc[] include::audio/configuration.adoc[] include::audio/getting_started.adoc[] include::audio/hardware-info.adoc[] include::audio/update-firmware.adoc[] --- # Source: compat.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Device Compatibility The Build HAT library supports all the LEGO® Technic™ devices included in the SPIKE™ Portfolio, along with those from the LEGO® Mindstorms Robot Inventor kit and other devices that use a PoweredUp connector. IMPORTANT: The product code for the SPIKE™ Prime Expansion Set that includes the Maker Plate is 45681. 
The original Expansion Set is 45680 and does not include the Maker Plate.

[cols="2,2,1,1,1,1,1,3,1,1,1,1", width="100%", options="header"]
|===
| Description | Colour | LEGO Item Number | Supported in FW | Supported in Python | Alt Number | BrickLink | Available In | Set Numbers | Class | Type | Device ID

| Large Angular Motor | White/Cyan | 45602 | Yes | Yes | 45602 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=45602-1#T=S&O={%22iconly%22:0}[Link] | SPIKE Prime Set, SPIKE Prime Expansion Set | 45678, 45680 | Motor | Active | 31
| Medium Angular Motor | White/Cyan | 45603 | Yes | Yes | 45603 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=45603-1#T=S&O={%22iconly%22:0}[Link] | SPIKE Prime Set | 45678 | Motor | Active | 30
| Medium Angular Motor | White/Grey | 6299646, 6359216, 6386708 | Yes | Yes | 436655 | https://www.bricklink.com/v2/catalog/catalogitem.page?P=54696c01&idColor=86#T=C&C=86[Link] | Mindstorms Robot Inventor | 51515 | Motor | Active | 4B
| Small Angular Motor | White/Cyan | 45607, 6296520 | Yes | Yes | | https://www.bricklink.com/v2/catalog/catalogitem.page?P=45607c01[Link] | SPIKE Essentials Set | | Motor | Active | 41
| Light/Colour sensor | White/Black | 6217705 | Yes | Yes | | https://www.bricklink.com/v2/catalog/catalogitem.page?P=37308c01&idColor=11#T=C&C=11[Link] | SPIKE Prime Set, SPIKE Prime Expansion Set, Mindstorms Robot Inventor, SPIKE Essentials | 45678, 45680, 51515 | ColorSensor | Active | 3D
| Distance Sensor | White/Black | 6302968 | Yes | Yes | | https://www.bricklink.com/v2/catalog/catalogitem.page?P=37316c01&idColor=11#T=C&C=11[Link] | SPIKE Prime Set, Mindstorms Robot Inventor | 45678, 51515 | DistanceSensor | Active | 3E
| System medium motor | White/Grey | 45303, 6138854, 6290182, 6127110 | Yes | Yes | | | WeDo 2.0, LEGO Ideas Piano, App controlled Batmobile | 76112 | | Passive | 1
| Force Sensor | White/Black | 6254354 | Yes | Yes | 45606 | https://www.bricklink.com/v2/catalog/catalogitem.page?P=37312c01&idColor=11#T=C&C=11[Link] | SPIKE Prime Set | 45678 | ForceSensor | Active | 3F
| 3×3 LED | White/Cyan | 45608, 6297023 | Yes | Yes | | https://www.bricklink.com/v2/catalog/catalogitem.page?P=45608c01[Link] | SPIKE Essentials | | Matrix | Active | 40
| System train motor | Black | 88011 | Yes | Yes | 28740, 88011-1 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=88011-1#T=S&O={%22iconly%22:0}[Link] | Cargo Train, Disney Train and Station, Passenger Train | | | Passive | 2
| PoweredUp LED lights | Black | 88005 | Yes | | | https://www.bricklink.com/v2/catalog/catalogitem.page?S=88005-1#T=S&O={%22iconly%22:0}[Link] | | | | Passive | 8
| Medium linear motor | White/Grey | 88008 | Yes | Yes | 26913, 88008-1 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=88008-1#T=S&O={%22iconly%22:0}[Link] | Boost, Droid Commander | | Motor | Active | 26
| Technic large motor | Grey/Grey | 88013 | Yes | Yes | 22169 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=88013-1#T=S&O={%22iconly%22:0}[Link] | | | | Active | 2E
| Technic XL motor | Grey/Grey | 88014 | Yes | Yes | 22172, 88014 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=88014-1#T=S&O={%22iconly%22:0}[Link] | | | | Active | 2F
| Colour + distance sensor | White/Grey | 88007 | Partial | ? | 26912 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=88007-1#T=S&O={%22iconly%22:0}[Link] | | | | Active | 25
| WeDo 2.0 Motion sensor | White/Grey | 45304, 6138855 | | | 5003423-1 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=9583-1#T=S&O={%22iconly%22:0}[Link] | | | | Active | 35
| WeDo 2.0 Tilt sensor | White/Grey | 45305, 6138856 | | | 5003423-1 | https://www.bricklink.com/v2/catalog/catalogitem.page?S=9584-1#T=S&O={%22iconly%22:0}[Link] | | | | Active | 34
|===

---

# Source: introduction.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

[[about-build-hat]]
== About

The https://raspberrypi.com/products/build-hat[Raspberry Pi Build HAT] is an add-on board, designed in collaboration with LEGO® Education, that connects to the 40-pin GPIO header of your Raspberry Pi and makes it easy to control LEGO® Technic™ motors and sensors with Raspberry Pi computers.

WARNING: The Raspberry Pi Build HAT is not yet supported by Raspberry Pi OS _Trixie_. To use the Build HAT, install or stay on Raspberry Pi OS _Bookworm_ for now.

image::images/build-hat.jpg[width="80%"]

NOTE: A full list of supported devices can be found in the xref:build-hat.adoc#device-compatibility[Device Compatibility] section.

It provides four connectors for LEGO® Technic™ motors and sensors from the SPIKE™ Portfolio. The available sensors include a distance sensor, a colour sensor, and a versatile force sensor. The angular motors come in a range of sizes and include integrated encoders that can be queried to find their position.

The Build HAT fits any Raspberry Pi computer with a 40-pin GPIO header; with the addition of a ribbon cable or other extension device, it also fits Keyboard-series devices. Connected LEGO® Technic™ devices can easily be controlled in Python, alongside standard Raspberry Pi accessories such as a camera module.
The Raspberry Pi Build HAT power supply (PSU), which is https://raspberrypi.com/products/build-hat-power-supply[available separately], is designed to power the Build HAT and Raspberry Pi computer, along with all connected LEGO® Technic™ devices.

image::images/psu.jpg[width="80%"]

The LEGO® Education SPIKE™ Prime Set 45678 and SPIKE™ Prime Expansion Set 45681, available separately from LEGO® Education resellers, include a collection of useful elements supported by the Build HAT.

NOTE: The HAT works with all 40-pin GPIO Raspberry Pi boards, including Zero-series devices. With the addition of a ribbon cable or other extension device, it can also be used with Keyboard-series devices.

* Controls up to 4 LEGO® Technic™ motors and sensors included in the SPIKE™ Portfolio
* Easy-to-use https://buildhat.readthedocs.io/[Python library] to control your LEGO® Technic™ devices
* Fits onto any Raspberry Pi computer with a 40-pin GPIO header
* Onboard xref:../microcontrollers/silicon.adoc[RP2040] microcontroller manages low-level control of LEGO® Technic™ devices
* External 8V PSU https://raspberrypi.com/products/build-hat-power-supply[available separately] to power both Build HAT and Raspberry Pi

[NOTE]
====
The Build HAT cannot power Keyboard-series devices, since they do not support power supply over the GPIO headers.
====

---

# Source: links-to-other.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Further Resources

You can download documentation for the following:

* https://datasheets.raspberrypi.com/build-hat/build-hat-serial-protocol.pdf[Raspberry Pi Build HAT Serial Protocol]
* https://datasheets.raspberrypi.com/build-hat/build-hat-python-library.pdf[Raspberry Pi Build HAT Python Library]

Full details of the Python library documentation can also be found https://buildhat.readthedocs.io/[on ReadTheDocs].
You can find more information on the .NET library in the https://github.com/dotnet/iot/tree/main/src/devices/BuildHat[.NET IoT] GitHub repository.

You can also follow along with projects from the Raspberry Pi Foundation:

* https://projects.raspberrypi.org/en/projects/lego-game-controller[LEGO® Game Controller]
* https://projects.raspberrypi.org/en/projects/lego-robot-car[LEGO® Robot Car]
* https://projects.raspberrypi.org/en/projects/lego-plotter[LEGO® Plotter]
* https://projects.raspberrypi.org/en/projects/lego-robot-face[LEGO® Robot Face]
* https://projects.raspberrypi.org/en/projects/lego-data-dash[LEGO® Data Dash]

---

# Source: mech.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Mechanical Drawings

Mechanical drawing of the Raspberry Pi Build HAT.

image::images/mech-build-hat.png[width="80%"]

---

# Source: net-brick.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Use the Build HAT from .NET

The Raspberry Pi Build HAT is referred to as the "Brick" in LEGO® parlance, and you can talk directly to it from .NET using the https://datasheets.raspberrypi.com/build-hat/build-hat-serial-protocol.pdf[Build HAT Serial Protocol].

You can create a `brick` object as follows:

[source,csharp]
----
Brick brick = new("/dev/serial0");
----

but you need to remember to dispose of the `brick` at the end of your code:

[source,csharp]
----
brick.Dispose();
----

WARNING: If you don't call `brick.Dispose()`, your program will not terminate.

If you want to avoid calling `brick.Dispose` at the end, create your brick with the `using` statement:

[source,csharp]
----
using Brick brick = new("/dev/serial0");
----

In this case, your brick is disposed automatically when the program reaches the end of the `using` scope.
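For readers who will use the Python route later in this guide, the `using` pattern above maps onto Python's context managers. The following is an illustrative sketch only, using a hypothetical `Brick` stand-in rather than the real serial-connected class, to show the same guaranteed-cleanup idea:

```python
class Brick:
    """Hypothetical stand-in for a resource that needs deterministic cleanup."""

    def __init__(self, port):
        self.port = port
        self.disposed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.dispose()
        return False  # do not swallow exceptions

    def dispose(self):
        # A real implementation would close the serial port here.
        self.disposed = True


with Brick("/dev/serial0") as brick:
    pass  # talk to the brick here

# On leaving the with-block, dispose() has run automatically.
```

As with the C# `using` statement, cleanup runs even if an exception is raised inside the block.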
==== Display Build HAT information

You can gather the various software versions, the signature, and the input voltage:

[source,csharp]
----
var info = brick.BuildHatInformation;
Console.WriteLine($"version: {info.Version}, firmware date: {info.FirmwareDate}, signature:");
Console.WriteLine($"{BitConverter.ToString(info.Signature)}");
Console.WriteLine($"Vin = {brick.InputVoltage.Volts} V");
----

NOTE: The input voltage is read only once at boot time and is not read again afterwards.

==== Getting sensors and motors details

The `GetSensorType` and `GetSensor` functions let you retrieve information about any connected sensor.

[source,csharp]
----
SensorType sensor = brick.GetSensorType((SensorPort)i);
Console.Write($"Port: {i} {(Brick.IsMotor(sensor) ? "Motor" : "Sensor")} type: {sensor} Connected: ");
----

In this example, you can also use the static `IsMotor` function to check whether the connected element is a motor or a sensor.

[source,csharp]
----
if (Brick.IsActiveSensor(sensor))
{
    ActiveSensor activeSensor = brick.GetActiveSensor((SensorPort)i);
}
else
{
    var passive = (Sensor)brick.GetSensor((SensorPort)i);
    Console.WriteLine(passive.IsConnected);
}
----

`ActiveSensor` has a collection of advanced properties and functions that let you inspect every element of the sensor. It is also possible to call the brick's primitive functions from it, which lets you select specific modes and build advanced scenarios. While this is possible, the motor and sensor classes have been created to make your life easier.

==== Events

Most sensors implement events on their key properties. You can simply subscribe to `PropertyChanged` and `PropertyUpdated`: the changed event fires when the value changes, while the updated event fires whenever the property is successfully refreshed. Depending on the modes used, some properties may be updated continuously in the background, while others update only occasionally.
You may only be interested in when a colour changes, or when the position of a motor changes when using it as a tachometer. In that case, `PropertyChanged` is what you need!

[source,csharp]
----
Console.WriteLine("Move motor on Port A to more than position 100 to stop this test.");
brick.WaitForSensorToConnect(SensorPort.PortA);
var active = (ActiveMotor)brick.GetMotor(SensorPort.PortA);
bool continueToRun = true;
active.PropertyChanged += MotorPropertyEvent;
while (continueToRun)
{
    Thread.Sleep(50);
}

active.PropertyChanged -= MotorPropertyEvent;
Console.WriteLine($"Current position: {active.Position}, eventing stopped.");

void MotorPropertyEvent(object? sender, PropertyChangedEventArgs e)
{
    Console.WriteLine($"Property changed: {e.PropertyName}");
    if (e.PropertyName == nameof(ActiveMotor.Position))
    {
        if (((ActiveMotor)brick.GetMotor(SensorPort.PortA)).Position > 100)
        {
            continueToRun = false;
        }
    }
}
----

==== Wait for initialisation

The brick can take a long time to initialise, so a function is provided to wait for a sensor to be connected:

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortB);
----

It also takes a `CancellationToken` if you want to implement advanced features like warning the user after some time and retrying.

---

# Source: net-installing-software.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Use the Build HAT from .NET

=== Install the .NET Framework

The .NET framework from Microsoft is not available via `apt` on Raspberry Pi. However, you can follow the https://docs.microsoft.com/en-us/dotnet/iot/deployment[official instructions] from Microsoft to install the .NET framework. Alternatively, there is a simplified https://www.petecodes.co.uk/install-and-use-microsoft-dot-net-5-with-the-raspberry-pi/[third party route] to get the .NET toolchain onto your Raspberry Pi.

WARNING: The installation script is run as `root`. You should read it first and make sure you understand what it is doing.
If you are at all unsure, follow the https://docs.microsoft.com/en-us/dotnet/iot/deployment[official instructions] manually.

[source,console]
----
$ wget -O - https://raw.githubusercontent.com/pjgpetecodes/dotnet5pi/master/install.sh | sudo bash
----

After installing the .NET framework, you can create your project:

[source,console]
----
$ dotnet new console --name buildhat
----

This creates a default program in the `buildhat` subdirectory; change into that directory to continue:

[source,console]
----
$ cd buildhat
----

You will now need to install the following NuGet packages:

[source,console]
----
$ dotnet add package System.Device.Gpio --version 2.1.0
$ dotnet add package Iot.Device.Bindings --version 2.1.0
----

=== Run C# Code

You can run the program with the `dotnet run` command. Let's try it now to make sure everything works. It should print "Hello World!"

[source,console]
----
$ dotnet run
Hello World!
----

(When instructed to "run the program" in the instructions that follow, simply rerun `dotnet run`.)

=== Edit C# Code

In the instructions below, you will be editing the file `buildhat/Program.cs`, the C# program generated when you ran the above commands. Any text editor will work to edit C# code, including Geany, the IDE/text editor that comes pre-installed. https://code.visualstudio.com/docs/setup/raspberry-pi/[Visual Studio Code] (often called "VS Code") is also a popular alternative.

---

# Source: net-motors.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Use Motors from .NET

There are two types of motor: *passive* and *active*. Active motors provide detailed position, absolute position and speed, while passive motors can only be controlled by speed. A common set of functions to control the speed of the motors is available.
There are two important ones: `SetPowerLimit` and `SetBias`:

[source,csharp]
----
train.SetPowerLimit(1.0);
train.SetBias(0.2);
----

The accepted values range from 0.0 to 1.0. The power limit is a convenient way to scale down the maximum power proportionally. The bias value, set per port, is added to positive motor drive values and subtracted from negative motor drive values. It can be used to compensate for the fact that most DC motors require a certain amount of drive before they will turn at all. The default values when a motor is created are 0.7 for the power limit and 0.3 for the bias.

==== Passive Motors

.Train motor, https://www.bricklink.com/v2/catalog/catalogitem.page?S=88011-1&name=Train%20Motor&category=%5BPower%20Functions%5D%5BPowered%20Up%5D#T=S&O={%22iconly%22:0}[Image from Bricklink]
image::images/train-motor.png[Train motor,width="60%"]

Typical passive motors are train motors and older Powered Up motors. The `Speed` property can be set and read. It acts as both the target and the measured speed, since these motors have no way to measure their actual speed. The value ranges from -100 to +100. Functions to control `Start`, `Stop` and `SetSpeed` are also available. Here is an example of how to use them:

[source,csharp]
----
Console.WriteLine("This will run the motor for 20 seconds incrementing the PWM");
train.SetPowerLimit(1.0);
train.Start();
for (int i = 0; i < 100; i++)
{
    train.SetSpeed(i);
    Thread.Sleep(250);
}

Console.WriteLine("Stop the train for 2 seconds");
train.Stop();
Thread.Sleep(2000);
Console.WriteLine("Full speed backward for 2 seconds");
train.Start(-100);
Thread.Sleep(2000);
Console.WriteLine("Full speed forward for 2 seconds");
train.Start(100);
Thread.Sleep(2000);
Console.WriteLine("Stop the train");
train.Stop();
----

NOTE: Once the train is started, you can adjust the speed and the motor will adjust accordingly.
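The combined effect of the bias and the power limit described above can be modelled with a little standalone arithmetic. This is an illustration of the documented behaviour, not the firmware's actual algorithm:

```python
def apply_bias_and_limit(drive, bias=0.3, limit=0.7):
    """Model a motor drive value in [-1, 1]: add the bias in the drive
    direction, clamp, then scale by the power limit. The defaults match
    the documented creation-time values (limit 0.7, bias 0.3)."""
    if drive > 0:
        drive += bias
    elif drive < 0:
        drive -= bias
    drive = max(-1.0, min(1.0, drive))  # clamp to the valid range
    return drive * limit

print(round(apply_bias_and_limit(0.5), 2))  # 0.56
```

Note that a zero drive stays zero: the bias only shifts non-zero commands past the motor's dead zone.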
==== Active Motors

.Active motor, https://www.bricklink.com/v2/catalog/catalogitem.page?S=88014-1&name=Technic%20XL%20Motor&category=%5BPower%20Functions%5D%5BPowered%20Up%5D#T=S&O={%22iconly%22:0}[Image from Bricklink]
image::images/active-motor.png[Active motor,width="60%"]

Active motors have `Speed`, `AbsolutePosition`, `Position` and `TargetSpeed` as special properties. They are read continuously, even when the motor is stopped.

The code snippet shows how to get the motors, start them and read the properties:

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
brick.WaitForSensorToConnect(SensorPort.PortD);

var active = (ActiveMotor)brick.GetMotor(SensorPort.PortA);
var active2 = (ActiveMotor)brick.GetMotor(SensorPort.PortD);

active.Start(50);
active2.Start(50);
// Make sure you have active motors plugged into ports A and D
while (!Console.KeyAvailable)
{
    Console.CursorTop = 1;
    Console.CursorLeft = 0;
    Console.WriteLine($"Absolute: {active.AbsolutePosition} ");
    Console.WriteLine($"Position: {active.Position} ");
    Console.WriteLine($"Speed: {active.Speed} ");
    Console.WriteLine();
    Console.WriteLine($"Absolute: {active2.AbsolutePosition} ");
    Console.WriteLine($"Position: {active2.Position} ");
    Console.WriteLine($"Speed: {active2.Speed} ");
}

active.Stop();
active2.Stop();
----

NOTE: Don't forget to start and stop your motors when needed.

Advanced features are available for active motors. You can request a move for a number of seconds, to a specific position, or to a specific absolute position. Here are a couple of examples:

[source,csharp]
----
// From the previous example, this will turn the motors back to their initial position:
active.TargetSpeed = 100;
active2.TargetSpeed = 100;
// First this motor, and it will block the thread
active.MoveToPosition(0, true);
// Then this one, and it will also block the thread
active2.MoveToPosition(0, true);
----

Each function lets you choose whether to block the thread while the operation is performed.
Note that for absolute and relative position moves, there is a tolerance of a few degrees.

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var active = (ActiveMotor)brick.GetMotor(SensorPort.PortA);
active.TargetSpeed = 70;
Console.WriteLine("Moving motor to position 0");
active.MoveToPosition(0, true);
Console.WriteLine("Moving motor to position 3600 (10 turns)");
active.MoveToPosition(3600, true);
Console.WriteLine("Moving motor to position -3600 (so 20 turns the other way)");
active.MoveToPosition(-3600, true);
Console.WriteLine("Moving motor to absolute position 0, should rotate by 90°");
active.MoveToAbsolutePosition(0, PositionWay.Shortest, true);
Console.WriteLine("Moving motor to position 90");
active.MoveToAbsolutePosition(90, PositionWay.Shortest, true);
Console.WriteLine("Moving motor to position 179");
active.MoveToAbsolutePosition(179, PositionWay.Shortest, true);
Console.WriteLine("Moving motor to position -180");
active.MoveToAbsolutePosition(-180, PositionWay.Shortest, true);
active.Float();
----

You can place the motor in a float position, meaning there are no longer any constraints on it. Use this mode when treating the motor as a tachometer: move it by hand and read the position. While constraints remain on a motor, you may not be able to move it.

---

# Source: net-sensors.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Use Sensors from .NET

As with motors, there are active and passive sensors. Most recent sensors are active. The passive ones are lights and simple buttons; active ones include distance and colour sensors, as well as small 3×3 pixel displays.

==== Button/Touch Passive Sensor

The button/touch passive sensor has one specific property, `IsPressed`, which is set to `true` when the button is pressed.
Here is a complete example with events:

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var button = (ButtonSensor)brick.GetSensor(SensorPort.PortA);
bool continueToRun = true;
button.PropertyChanged += ButtonPropertyEvent;
while (continueToRun)
{
    // You can do many other things here
    Thread.Sleep(50);
}

button.PropertyChanged -= ButtonPropertyEvent;
Console.WriteLine($"Button has been pressed, we're stopping the program.");
brick.Dispose();

void ButtonPropertyEvent(object? sender, PropertyChangedEventArgs e)
{
    Console.WriteLine($"Property changed: {e.PropertyName}");
    if (e.PropertyName == nameof(ButtonSensor.IsPressed))
    {
        continueToRun = false;
    }
}
----

==== Passive Light

.Passive light, https://www.bricklink.com/v2/catalog/catalogitem.page?P=22168c01&name=Electric,%20Light%20Unit%20Powered%20Up%20Attachment&category=%5BElectric,%20Light%20&%20Sound%5D#T=C&C=11[Image from Bricklink]
image::images/passive-light.png[Passive light, width="60%"]

The passive lights are the train lights. They can be switched on, and you can control their brightness.

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var light = (PassiveLight)brick.GetSensor(SensorPort.PortA);
// Brightness 50%
light.On(50);
Thread.Sleep(2000);
// 70% Brightness
light.Brightness = 70;
Thread.Sleep(2000);
// Switch light off
light.Off();
----

==== Active Sensor

The active sensor class is a generic class from which all active sensors inherit, including active motors. It contains a set of properties describing how the device is connected to the Build HAT, the modes, the detailed Combi modes, the hardware and software versions, and a specific property called `ValueAsString`. The value as string contains the last measurement as a collection of strings. A measurement arrives as, for example, `P0C0: +23 -42 0`; the enumeration will then contain `P0C0:`, `+23`, `-42` and `0`.
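To make the shape of that raw string concrete, here is a small standalone sketch (written in Python for illustration; it is not part of the .NET library) that splits a measurement like the one above into its prefix and numeric values:

```python
def parse_measurement(raw):
    """Split a raw measurement string such as 'P0C0: +23 -42 0'
    into its port/channel prefix and a list of integer values."""
    parts = raw.split()
    prefix = parts[0]
    values = [int(v) for v in parts[1:]]
    return prefix, values

prefix, values = parse_measurement("P0C0: +23 -42 0")
print(prefix, values)  # P0C0: [23, -42, 0]
```

The signed tokens `+23` and `-42` parse directly as integers; the prefix identifies the port and mode the values came from.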
This is provided so that, if you are using advanced modes and managing the Combi modes and commands yourself, you can still retrieve the measurements. All active sensors can run a specific measurement mode or a Combi mode. You can set one up through the advanced mode using the `SelectModeAndRead` and `SelectCombiModesAndRead` functions, passing the specific mode(s) you'd like to read continuously. It is important to understand that changing the mode or setting up a new mode will stop the previous mode. The modes that can be combined in the Combi mode are listed in the `CombiModes` property. All the sensor properties are updated automatically once one of these modes is set up.

==== WeDo Tilt Sensor

.WeDo Tilt sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?S=45305-1&name=WeDo%202.0%20Tilt%20Sensor&category=%5BEducational%20&%20Dacta%5D%5BWeDo%5D#T=S&O={%22iconly%22:0}[Image from Bricklink]
image::images/wedo-tilt.png[WeDo Tilt sensor, width="60%"]

The WeDo Tilt Sensor has a special `Tilt` property. Its type is a point, where X is the X tilt and Y is the Y tilt. The values range from -45 to +45 degrees and are capped to those limits. You can set up continuous measurement for this sensor using the `ContinuousMeasurement` property.

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var tilt = (WeDoTiltSensor)brick.GetSensor(SensorPort.PortA);
tilt.ContinuousMeasurement = true;
Point tiltValue;
while(!Console.KeyAvailable)
{
    tiltValue = tilt.Tilt;
    Console.WriteLine($"Tilt X: {tiltValue.X}, Tilt Y: {tiltValue.Y}");
    Thread.Sleep(200);
}
----

==== WeDo Distance Sensor

.WeDo Distance sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?S=45304-1&name=WeDo%202.0%20Motion%20Sensor&category=%5BEducational%20&%20Dacta%5D%5BWeDo%5D#T=S&O={%22iconly%22:0}[Image from Bricklink]
image::images/wedo-distance.png[WeDo Distance sensor, width="60%"]

The WeDo Distance Sensor gives you a distance in millimetres through its `Distance` property.
[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var distance = (WeDoDistanceSensor)brick.GetSensor(SensorPort.PortA);
distance.ContinuousMeasurement = true;
while(!Console.KeyAvailable)
{
    Console.WriteLine($"Distance: {distance.Distance} mm");
    Thread.Sleep(200);
}
----

==== SPIKE Prime Force Sensor

.Spike Force Sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=37312c01&name=Electric%20Sensor,%20Force%20-%20Spike%20Prime&category=%5BElectric%5D#T=C&C=11[Image from Bricklink]
image::images/spike-force.png[spike force sensor, width="60%"]

This force sensor measures the pressure applied to it and whether it is pressed. The two values can be accessed through the `Force` and `IsPressed` properties.

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var force = (ForceSensor)brick.GetSensor(SensorPort.PortA);
force.ContinuousMeasurement = true;
while(!force.IsPressed)
{
    Console.WriteLine($"Force: {force.Force} N");
    Thread.Sleep(200);
}
----

==== SPIKE Essential 3×3 Colour Light Matrix

.spike 3×3 matrix, https://www.bricklink.com/v2/catalog/catalogitem.page?P=45608c01&name=Electric,%203%20x%203%20Color%20Light%20Matrix%20-%20SPIKE%20Prime&category=%5BElectric%5D#T=C[Image from Bricklink]
image::images/3x3matrix.png[spike 3×3 matrix, width="60%"]

This is a small 3×3 display with 9 LEDs that can be controlled individually. The class exposes functions to control the screen.
Here is an example using them:

[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var matrix = (ColorLightMatrix)brick.GetSensor(SensorPort.PortA);
for(byte i = 0; i < 10; i++)
{
    // Will light every LED one after the other, like a progress bar
    matrix.DisplayProgressBar(i);
    Thread.Sleep(1000);
}

for(byte i = 0; i < 11; i++)
{
    // Will display the whole matrix in a single colour, cycling through all of them
    matrix.DisplayColor((LedColor)i);
    Thread.Sleep(1000);
}

Span<byte> brg = stackalloc byte[9] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Span<LedColor> col = stackalloc LedColor[9] { LedColor.White, LedColor.White, LedColor.White,
    LedColor.White, LedColor.White, LedColor.White, LedColor.White, LedColor.White, LedColor.White };
// Shades of grey
matrix.DisplayColorPerPixel(brg, col);
----

==== SPIKE Prime Colour Sensor and Colour and Distance Sensor

SPIKE colour sensor:

.spike colour sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=37308c01&name=Electric%20Sensor,%20Color%20-%20Spike%20Prime&category=%5BElectric%5D#T=C&C=11[Image from Bricklink]
image::images/spike-color.png[spike color sensor, width="60%"]

Colour and distance sensor:

.Color distance sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=bb0891c01&name=Electric%20Sensor,%20Color%20and%20Distance%20-%20Boost&category=%5BElectric%5D#T=C&C=1[Image from Bricklink]
image::images/color-distance.png[Colour distance sensor, width="60%"]

These colour sensors have multiple properties and functions. You can get the `Color`, the `ReflectedLight` and the `AmbiantLight`. On top of this, the Colour and Distance sensor can measure the `Distance` and has an object `Counter`, which automatically counts the number of objects moving in and out of range. This allows you to count objects passing in front of the sensor. The range is limited to 0 to 10 centimetres.
[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortC);
var colorSensor = (ColorAndDistanceSensor)brick.GetActiveSensor(SensorPort.PortC);
while (!Console.KeyAvailable)
{
    var colorRead = colorSensor.GetColor();
    Console.WriteLine($"Color: {colorRead}");
    var reflected = colorSensor.GetReflectedLight();
    Console.WriteLine($"Reflected: {reflected}");
    var ambiant = colorSensor.GetAmbiantLight();
    Console.WriteLine($"Ambiant: {ambiant}");
    var distance = colorSensor.GetDistance();
    Console.WriteLine($"Distance: {distance}");
    var counter = colorSensor.GetCounter();
    Console.WriteLine($"Counter: {counter}");
    Thread.Sleep(200);
}
----

NOTE: For reliable measurements, avoid switching measurement modes too quickly; the colour integration may not complete properly.

This example shows the full range of what you can do with the sensor. Note that this class does not implement a continuous measurement mode. You can set one up through the advanced mode using the `SelectModeAndRead` function, passing the specific mode you'd like to read continuously. It is important to understand that changing the mode or setting up a new mode will stop the previous mode.

==== SPIKE Prime Ultrasonic Distance Sensor

.Spike distance sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=37316c01&name=Electric%20Sensor,%20Distance%20-%20Spike%20Prime&category=%5BElectric%5D#T=C&C=11[Image from Bricklink]
image::images/spike-distance.png[Spike distance sensor, width="60%"]

This is a distance sensor that implements a `Distance` property giving the distance in millimetres. A `ContinuousMeasurement` mode is also available.
[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var distance = (UltrasonicDistanceSensor)brick.GetSensor(SensorPort.PortA);
distance.ContinuousMeasurement = true;
while(!Console.KeyAvailable)
{
    Console.WriteLine($"Distance: {distance.Distance} mm");
    Thread.Sleep(200);
}
----

---

# Source: preparing-build-hat.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Prepare your Build HAT

NOTE: Before starting to work with your Raspberry Pi Build HAT, you should xref:../computers/getting-started.adoc#setting-up-your-raspberry-pi[set up] your Raspberry Pi and xref:../computers/getting-started.adoc#installing-the-operating-system[install] the latest version of the operating system using https://www.raspberrypi.com/downloads/[Raspberry Pi Imager].

Attach 9 mm spacers to the bottom of the board. Seat the Raspberry Pi Build HAT onto your Raspberry Pi, making sure you put it on the right way up. Unlike other HATs, all the components are on the bottom, leaving room for a breadboard or LEGO® elements on top.

video::images/fitting-build-hat.webm[width="80%"]

=== Access the GPIO Pins

If you want to access the GPIO pins of the Raspberry Pi, you can add an optional tall header and use 15 mm spacers.

image::images/tall-headers.jpg[width="80%"]

The following pins are used by the Build HAT itself, and you should not connect anything to them.

[[table_passive_ids]]
[cols="^1,^1,^1", width="75%", options="header"]
|===
| GPIO | Use | Status

| GPIO0/1 | ID prom |
| GPIO4 | Reset |
| GPIO14 | Tx |
| GPIO15 | Rx |
| GPIO16 | RTS | unused
| GPIO17 | CTS | unused
|===

=== Set up your Raspberry Pi

Once the Raspberry Pi has booted, open the Control Centre tool by selecting the Raspberry Menu button and then selecting **Preferences > Control Centre**. Select the **Interfaces** tab and adjust the serial settings as shown in the following image:

image::images/setting-up.png["The Interfaces tab. SSH, VNC, and Serial Port are enabled. The rest of the options are not enabled.", width="50%"]

==== Use your Raspberry Pi headless

If you are running your Raspberry Pi headless and using `raspi-config`, select "Interface Options" from the first menu.

image::images/raspi-config-1.png[width="70%"]

Then "P6 Serial Port".

image::images/raspi-config-2.png[width="70%"]

Disable the serial console:

image::images/raspi-config-3.png[width="70%"]

And enable the serial port hardware.

image::images/raspi-config-4.png[width="70%"]

The final settings should look like this.

image::images/raspi-config-5.png[width="70%"]

You will need to reboot at this point if you have made any changes.

=== Power the Build HAT

Connect an external power supply. The https://raspberrypi.com/products/build-hat-power-supply[official Raspberry Pi Build HAT power supply] is recommended, but any reliable +8V ±10% power supply capable of supplying 48 W via a DC 5521 centre-positive barrel connector (5.5 mm × 2.1 mm × 11 mm) will power the Build HAT. You don't need to connect an additional USB power supply to the Raspberry Pi unless you are using a Keyboard-series device.

[NOTE]
====
The Build HAT cannot power Keyboard-series devices, since they do not support power supply over the GPIO headers.
====

video::images/powering-build-hat.webm[width="80%"]

[NOTE]
====
The LEGO® Technic™ motors are very powerful, so to drive them you'll need an external 8V power supply. If you just want to read from motor encoders and the SPIKE™ force sensor, you can power your Raspberry Pi and Build HAT the usual way, via your Raspberry Pi's USB power socket. The SPIKE™ colour and distance sensors, like the motors, require an https://raspberrypi.com/products/build-hat-power-supply[external power supply].
====

You have the choice to use the Build HAT with Python or .NET.
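As a quick sanity check of the supply figures quoted above, the tolerance window and the current draw at nominal voltage work out as follows (illustrative arithmetic only):

```python
nominal_v = 8.0    # volts, nominal supply voltage
tolerance = 0.10   # ±10%
power_w = 48.0     # watts the supply must deliver

v_min = nominal_v * (1 - tolerance)
v_max = nominal_v * (1 + tolerance)
current_a = power_w / nominal_v  # current at nominal voltage

print(f"Acceptable input: {v_min:.1f} V to {v_max:.1f} V")  # 7.2 V to 8.8 V
print(f"Current at 8 V for 48 W: {current_a:.1f} A")        # 6.0 A
```

In other words, the supply must tolerate delivering roughly 6 A at 8 V when motors are under load.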
---

# Source: py-installing-software.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Use the Build HAT from Python

=== Install the Build HAT Python Library

To install the Build HAT Python library, open a terminal window and run the following command:

[source,console]
----
$ sudo apt install python3-build-hat
----

Raspberry Pi OS versions prior to _Bookworm_ do not have access to the library with `apt`. Instead, run the following command to install the library using `pip`:

[source,console]
----
$ sudo pip3 install buildhat
----

For more information about the Build HAT Python Library, see https://buildhat.readthedocs.io/[ReadTheDocs].

---

# Source: py-motors.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Use Motors from Python

There are xref:build-hat.adoc#device-compatibility[a number of motors] that work with the Build HAT.

==== Connect a Motor

Connect a motor to port A on the Build HAT. The LPF2 connectors need to be inserted the correct way up. If the connector doesn't slide in easily, rotate by 180 degrees and try again.

video::images/connect-motor.webm[width="80%"]

==== Work with Motors

Start the https://thonny.org/[Thonny IDE]. Add the program code below:

[source,python]
----
from buildhat import Motor

motor_a = Motor('A')
motor_a.run_for_seconds(5)
----

Run the program by clicking the play/run button. If this is the first time you're running a Build HAT program since the Raspberry Pi booted, there will be a pause of a few seconds while the firmware is copied across to the board. You should see the red LED extinguish and the green LED illuminate. Subsequent executions of a Python program will not require this pause.

video::images/blinking-light.webm[width="80%"]

Your motor should turn clockwise for 5 seconds.

video::images/turning-motor.webm[width="80%"]

Change the final line of your program and re-run.

[source,python]
----
motor_a.run_for_seconds(5, speed=50)
----

The motor should now turn faster.
Make another change:

[source,python]
----
motor_a.run_for_seconds(5, speed=-50)
----

The motor should turn in the opposite (anti-clockwise) direction.

Create a new program by clicking on the plus button in Thonny. Add the code below:

[source,python]
----
from buildhat import Motor

motor_a = Motor('A')

while True:
    print("Position: ", motor_a.get_aposition())
----

Run the program. Grab the motor and turn the shaft. You should see the numbers printed in the Thonny REPL changing.

---

# Source: py-sensors.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Use Sensors from Python

There is a xref:build-hat.adoc#device-compatibility[large range of sensors] that work with the Build HAT.

==== Work with Sensors

Connect a Colour sensor to port B on the Build HAT, and a Force sensor to port C.

NOTE: If you're not intending to drive a motor, then you don't need an external power supply and you can use a standard USB power supply for your Raspberry Pi.

Create another new program:

[source,python]
----
from signal import pause
from buildhat import ForceSensor, ColorSensor

button = ForceSensor('C')
cs = ColorSensor('B')

def handle_pressed(force):
    cs.on()
    print(cs.get_color())

def handle_released(force):
    cs.off()

button.when_pressed = handle_pressed
button.when_released = handle_released
pause()
----

Run it and hold a coloured object (LEGO® elements are ideal) in front of the colour sensor and press the Force sensor plunger. The sensor's LED should switch on and the name of the closest colour should be displayed in the Thonny REPL.
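The name printed by `get_color()` is the palette colour nearest to the sensor's raw reading. As an illustration of that idea only (the palette values and the Euclidean matching rule below are invented for this example, not taken from the buildhat library), nearest-colour matching can be sketched like this:

```python
# Sketch of nearest-colour matching, similar in spirit to what
# ColorSensor.get_color() does. The palette here is illustrative;
# the buildhat library uses its own calibrated values.
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_colour(rgb):
    """Return the palette name closest to rgb by squared Euclidean distance."""
    def sq_dist(ref):
        return sum((a - b) ** 2 for a, b in zip(rgb, ref))
    return min(PALETTE, key=lambda name: sq_dist(PALETTE[name]))

print(nearest_colour((250, 20, 10)))   # a reddish reading
```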
--- # Source: build-hat.adoc *Note: This file could not be automatically converted from AsciiDoc.* // Intro include::build-hat/introduction.adoc[] include::build-hat/preparing-build-hat.adoc[] // Python include::build-hat/py-installing-software.adoc[] include::build-hat/py-motors.adoc[] include::build-hat/py-sensors.adoc[] // .NET include::build-hat/net-installing-software.adoc[] include::build-hat/net-brick.adoc[] include::build-hat/net-motors.adoc[] include::build-hat/net-sensors.adoc[] // Close out include::build-hat/links-to-other.adoc[] include::build-hat/compat.adoc[] include::build-hat/mech.adoc[] --- # Source: about.adoc *Note: This file could not be automatically converted from AsciiDoc.* == About .The Raspberry Pi Bumper for Raspberry Pi 5 image::images/bumper.jpg[width="80%"] The Raspberry Pi Bumper for Raspberry Pi 5 is a snap-on silicone cover that protects the bottom and edges of the board. When attached, the mounting holes of the Raspberry Pi remain accessible through the bumper. The Bumper is only compatible with Raspberry Pi 5. == Assembly instructions .Assembling the bumper image::images/assembly.png[width="80%"] To attach the Raspberry Pi Bumper to your Raspberry Pi: . Turn off your Raspberry Pi and disconnect the power cable. . Remove the SD card from the SD card slot of your Raspberry Pi. . Align the bumper with the board. . Press the board gently but firmly into the bumper, taking care to avoid contact between the bumper and any of the board’s components. . Insert your SD card back into the SD card slot of your Raspberry Pi. . Reconnect your Raspberry Pi to power. To remove the Raspberry Pi Bumper from your Raspberry Pi: . Turn off your Raspberry Pi and disconnect the power cable. . Remove the SD card from the SD card slot of your Raspberry Pi. . Gently but firmly peel the bumper away from the board, taking care to avoid contact between the bumper and any of the board’s components. . 
Insert your SD card back into the SD card slot of your Raspberry Pi. . Reconnect your Raspberry Pi to power. --- # Source: bumper.adoc *Note: This file could not be automatically converted from AsciiDoc.* include::bumper/about.adoc[] --- # Source: advanced.adoc *Note: This file could not be automatically converted from AsciiDoc.* :figure-caption!: == Advanced information This section provides information for advanced users. === Mechanical drawings and schematics Mechanical drawings and schematics are available on the Product Information Portal at the following locations: * https://pip.raspberrypi.com/categories/652-raspberry-pi-camera-module-2[Camera Module 2] * https://pip.raspberrypi.com/categories/786-raspberry-pi-camera-module-3[Camera Module 3] * https://pip.raspberrypi.com/categories/659-raspberry-pi-high-quality-camera[HQ Camera Module] * https://pip.raspberrypi.com/categories/810-raspberry-pi-global-shutter-camera[GS Camera Module] NOTE: Board dimensions and mounting-hole positions for Camera Module 3 are identical to Camera Module 2. However, due to changes in the size and position of the sensor module, it isn't mechanically compatible with the camera lid for the Raspberry Pi Zero case. In addition, the following figure shows the schematic for the Raspberry Pi CSI camera connector. === Pinout information Use the information in this section to understand the function of the pins in the camera connectors. ==== Locating pin 1 The location of pin 1 on an FPC connector depends on the hardware. The following descriptions assume you're holding your Raspberry Pi board with the chip and connectors facing up and the Raspberry Pi logo in the correct orientation. For Raspberry Pi Zero boards without the logo on the top side, orient the board with the GPIO along the edge furthest away from you. ** On Raspberry Pi flagship models and Raspberry Pi Zero devices, pin 1 is the pin furthest from you and closest to the GPIO header. 
** On Raspberry Pi Compute Module IO boards, pin 1 is marked with a small circle or dot depending on the model.

When holding the Raspberry Pi camera board with the lens facing down and the connector facing to your right, pin 1 is the pin closest to you.

==== Camera connector pinout (15-Pin)

This is the pinout of the 15-pin Camera Serial Interface (CSI) connector used on flagship Raspberry Pi models prior to Raspberry Pi 5. The connector is compatible with Amphenol SFW15R-2STE1LF.

Signal direction is specified from the perspective of the Raspberry Pi board. The I2C lines (SCL and SDA) are pulled up to 3.3 V on the Raspberry Pi board. The function and direction of the GPIO lines depend on the specific Camera Module in use. Typically, `CAM_IO0` is used as an active-high power enable. Some products don't include `CAM_IO1`.

|===
| Pin | Name | Description | Direction / Type

| 1 | GND | - | Ground
| 2 | CAM_DN0 | D-PHY lane 0 (negative) | Input, D-PHY
| 3 | CAM_DP0 | D-PHY lane 0 (positive) | Input, D-PHY
| 4 | GND | - | Ground
| 5 | CAM_DN1 | D-PHY lane 1 (negative) | Input, D-PHY
| 6 | CAM_DP1 | D-PHY lane 1 (positive) | Input, D-PHY
| 7 | GND | - | Ground
| 8 | CAM_CN | D-PHY Clock (negative) | Input, D-PHY
| 9 | CAM_CP | D-PHY Clock (positive) | Input, D-PHY
| 10 | GND | - | Ground
| 11 | CAM_IO0 | GPIO (for example, Power-Enable) | Bidirectional, 3.3 V
| 12 | CAM_IO1 | GPIO (for example, Clock, LED) | Bidirectional, 3.3 V
| 13 | SCL | I2C Clock | Bidirectional, 3.3 V
| 14 | SDA | I2C Data | Bidirectional, 3.3 V
| 15 | 3V3 | 3.3 V Supply | Output
|===

.Schematic of the Raspberry Pi CSI camera connector.
image::images/RPi-S5-conn.png[camera connector, width="65%"]

==== Camera connector pinout (22-Pin)

This is the pinout of the 22-pin Camera Serial Interface (CSI) connector used on the Raspberry Pi Zero series, the Compute Module IO boards, and flagship models since Raspberry Pi 5. The connector is compatible with Amphenol F32Q-1A7H1-11022.
Signal direction is specified from the perspective of the Raspberry Pi board. The I2C lines (SCL and SDA) are pulled up to 3.3 V on the Raspberry Pi board. The function and direction of the GPIO lines depend on the specific Camera Module in use. Typically, `CAM_IO0` is used as an active-high power enable. Some products don't include `CAM_IO1`.

|===
| Pin | Name | Description | Direction / Type

| 1 | GND | - | Ground
| 2 | CAM_DN0 | D-PHY lane 0 (negative) | Input, D-PHY
| 3 | CAM_DP0 | D-PHY lane 0 (positive) | Input, D-PHY
| 4 | GND | - | Ground
| 5 | CAM_DN1 | D-PHY lane 1 (negative) | Input, D-PHY
| 6 | CAM_DP1 | D-PHY lane 1 (positive) | Input, D-PHY
| 7 | GND | - | Ground
| 8 | CAM_CN | D-PHY Clock (negative) | Input, D-PHY
| 9 | CAM_CP | D-PHY Clock (positive) | Input, D-PHY
| 10 | GND | - | Ground
| 11 | CAM_DN2 | D-PHY lane 2 (negative) | Input, D-PHY
| 12 | CAM_DP2 | D-PHY lane 2 (positive) | Input, D-PHY
| 13 | GND | - | Ground
| 14 | CAM_DN3 | D-PHY lane 3 (negative) | Input, D-PHY
| 15 | CAM_DP3 | D-PHY lane 3 (positive) | Input, D-PHY
| 16 | GND | - | Ground
| 17 | CAM_IO0 | GPIO (for example, Power-Enable) | Bidirectional, 3.3 V
| 18 | CAM_IO1 | GPIO (for example, Clock, LED) | Bidirectional, 3.3 V
| 19 | GND | - | Ground
| 20 | SCL | I2C Clock | Bidirectional, 3.3 V
| 21 | SDA | I2C Data | Bidirectional, 3.3 V
| 22 | 3V3 | 3.3 V Supply | Output
|===

---

# Source: cm2.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

[[camera-module-2]]
== Camera Module 2

This 8-megapixel camera is built around the Sony IMX219 sensor with a resolution of 3280 × 2464 pixels. It has adjustable focus and can record an exposure time of up to 11.76 seconds.

Camera Module 2 comes in the following variants:

* https://www.raspberrypi.com/products/camera-module-v2/[**Standard**.] This version captures visible light only; infrared light is filtered out.
* https://www.raspberrypi.com/products/pi-noir-camera-v2/[**NoIR**.]
This version doesn't have an infrared filter; it captures both visible light and infrared light. Use it with infrared lighting to see in the dark or use it with the included square of blue gel to monitor the health of green plants. For detailed information about the hardware characteristics and capabilities of this camera, see the xref:../accessories/camera.adoc#hardware-specification[hardware specifications]. --- # Source: cm3.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[camera-module-3]] == Camera Module 3 This https://www.raspberrypi.com/products/camera-module-3/[12-megapixel camera] is built around the Sony IMX708 sensor with a resolution of 4608 × 2592 pixels. It has https://www.raspberrypi.com/news/new-autofocus-camera-modules/[powered autofocus] and can record an exposure time of up to 112 seconds. Camera Module 3 comes in the following variants: * Standard, normal field of view (FoV) * Standard, wide FoV * NoIR, normal FoV * NoIR, wide FoV The standard variants capture visible light only; infrared light is filtered out. The NoIR variants don't have an infrared filter; they capture both visible light and infrared light. Use a NoIR camera with infrared lighting to see in the dark or use it with the included square of blue gel to monitor the health of green plants. .Camera Module 3 (left) and Camera Module 3 Wide (right) image::images/cm3.jpg[Camera Module 3 normal and wide angle] .Camera Module 3 NoIR (left) and Camera Module 3 NoIR Wide (right) image::images/cm3_noir.jpg[Camera Module 3 NoIR normal and wide angle] For detailed information about the hardware characteristics and capabilities of this camera, see the xref:../accessories/camera.adoc#hardware-specification[hardware specifications]. NOTE: There is https://github.com/raspberrypi/libcamera/issues/43[some evidence] to suggest that the Camera Module 3 might emit RFI at a harmonic of the CSI clock rate. 
This RFI is in a range to interfere with GPS L1 frequencies (1575 MHz). For details and proposed workarounds, see the https://github.com/raspberrypi/libcamera/issues/43[thread on GitHub].

=== Transmission characteristics

The IMX708 sensor in Camera Module 3 has the following spectral sensitivity characteristics.

image::images/cm3-filter.png[Camera Module 3 Transmission Graph, width="65%"]

// Do we need another one of these for the NoIR characteristics?

---

# Source: external_trigger.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

[[external-trigger]]
== External Trigger on the Global Shutter Camera

The Global Shutter (GS) camera can be triggered externally by pulsing the external trigger connection on the board (denoted on the board as XTR). Multiple cameras can be connected to the same pulse, allowing for an alternative way to synchronise two cameras.

The exposure time is equal to the low pulse-width time plus an additional 14.26 µs; for example, a low pulse of 10000 µs leads to an exposure time of 10014.26 µs. The framerate is directly controlled by how often you pulse the pin: a PWM frequency of 30 Hz will lead to a framerate of 30 frames per second.

image::images/external_trigger.jpg[alt="Image showing pulse format",width="80%"]

=== Preparation

WARNING: This modification includes removing an SMD soldered part. You should not attempt this modification unless you feel you are competent to complete it. When soldering to the Camera board, please remove the plastic back cover to avoid damaging it.

If your board has transistor Q2 fitted (shown in blue on the image below), then you will need to remove R11 from the board (shown in red). R11 connects GP1 to XTR; without removing it, the camera will not operate in external trigger mode. The location of the components is displayed below.

image::images/resistor.jpg[alt="Image showing resistor to be removed",width="80%"]

Next, solder a wire to the touchpoints of XTR and GND on the GS Camera board.
Note that XTR is a 1.8 V input, so you may need a level shifter or potential divider. We can use a Raspberry Pi Pico to provide the trigger. Connect any Pico GPIO pin (GP28 is used in this example) to XTR via a 1.5 kΩ resistor. Also connect a 1.8 kΩ resistor between XTR and GND to reduce the high logic level to 1.8 V. A wiring diagram is shown below.

image::images/pico_wiring.jpg[alt="Image showing Raspberry Pi Pico wiring",width="50%"]

==== Raspberry Pi Pico MicroPython Code

[source,python]
----
from machine import Pin, PWM

pwm = PWM(Pin(28))

framerate = 30
shutter = 6000  # In microseconds
frame_length = 1000000 / framerate

pwm.freq(framerate)
pwm.duty_u16(int((1 - (shutter - 14) / frame_length) * 65535))
----

The low pulse width is equal to the shutter time, and the frequency of the PWM equals the framerate.

NOTE: In this example, Pin 28 connects to the XTR touchpoint on the GS camera board.

=== Camera driver configuration

This step is only necessary if you have more than one camera with XTR wired in parallel.

Edit `/boot/firmware/config.txt`. Change `camera_auto_detect=1` to `camera_auto_detect=0`. Append this line:

[source]
----
dtoverlay=imx296,always-on
----

When using the CAM0 port on a Raspberry Pi 5, CM4 or CM5, append `,cam0` to that line without a space. If both cameras are on the same Raspberry Pi, you will need two dtoverlay lines, only one of them ending with `,cam0`.

If the external trigger will not be started right away, you also need to increase the libcamera timeout xref:camera.adoc#libcamera-configuration[as above].

=== Starting the camera

Enable external triggering:

[source,console]
----
$ echo 1 | sudo tee /sys/module/imx296/parameters/trigger_mode
----

Run the code on the Pico, then set the camera running:

[source,console]
----
$ rpicam-hello -t 0 --qt-preview --shutter 3000
----

Every time the Pico pulses the pin, it should capture a frame.
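The divider values and the duty-cycle formula used above can be sanity-checked with ordinary desktop Python before touching the hardware; the snippet below simply restates the arithmetic from the example (no Pico or camera required).

```python
# Check the potential divider: 3.3 V driven through 1.5 kΩ, with 1.8 kΩ to GND.
v_xtr = 3.3 * 1800 / (1500 + 1800)
print(f"XTR high level: {v_xtr:.2f} V")   # 1.80 V, within the 1.8 V input spec

# Reproduce the duty-cycle calculation from the MicroPython example.
framerate = 30
shutter = 6000                          # low pulse width, in microseconds
frame_length = 1_000_000 / framerate    # frame period, in microseconds
duty = int((1 - (shutter - 14) / frame_length) * 65535)
print("duty_u16 value:", duty)

# Exposure time is the low pulse width plus 14.26 µs.
exposure = shutter + 14.26
print(f"exposure: {exposure:.2f} µs")
```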
However, if `--gain` and `--awbgains` are not set, some frames will be dropped to allow AGC and AWB algorithms to settle. NOTE: When running `rpicam-apps`, always specify a fixed shutter duration, to ensure the AGC does not try to adjust the camera's shutter speed. The value is not important, since it is actually controlled by the external trigger pulse. --- # Source: filters.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[ir-filter]] == IR Filter Both the High Quality Camera and Global Shutter Camera contain an IR filter to reduce the camera's sensitivity to infrared light and help outdoor photos look more natural. However, you may remove the filter to: * Enhance colours in certain types of photography, such as images of plants, water, and the sky * Provide night vision in a location that is illuminated with infrared light === Filter Removal WARNING: *This procedure cannot be reversed:* the adhesive that attaches the filter will not survive being lifted and replaced, and while the IR filter is about 1.1 mm thick, it may crack when it is removed. *Removing it will void the warranty on the product*. You can remove the filter from both the HQ and GS cameras. The HQ camera is shown in the demonstration below. image:images/FILTER_ON_small.jpg[width="65%"] IMPORTANT: To protect the sensor when exposed to the air, ensure that you're working in a clean and dust-free environment. . Unscrew the two 1.5 mm hex lock keys on the underside of the main circuit board. Be careful not to let the washers roll away. + image:images/SCREW_REMOVED_small.jpg[width="65%"] . There's a gasket of slightly sticky material between the housing and PCB that requires some force to separate. You may try some ways to weaken the adhesive, such as a little isopropyl alcohol or heat (~20-30°C). . When the adhesive is loose, lift up the board and place it down on a very clean surface. Make sure the sensor doesn't touch the surface. 
+
image:images/FLATLAY_small.jpg[width="65%"]
. Face the lens upwards and place the mount on a flat surface.
+
image:images/SOLVENT_small.jpg[width="65%"]
. To minimise the risk of breaking the filter, use a pen top or similar soft plastic item to push down on the filter only at the very edges where the glass attaches to the aluminium. The glue will break and the filter will detach from the lens mount.
+
image:images/REMOVE_FILTER_small.jpg[width="65%"]
. Given that changing lenses will expose the sensor, at this point you could affix a clear filter (for example, OHP plastic) to minimise the chance of dust entering the sensor cavity.
. Replace the main housing over the circuit board. Be sure to realign the housing with the gasket, which remains on the circuit board.
. Apply the nylon washer first to prevent damage to the circuit board.
. Next, fit the steel washer, which prevents damage to the nylon washer. Screw down the two hex lock keys. As long as the washers have been fitted in the correct order, they do not need to be screwed very tightly.
+
image:images/FILTER_OFF_small.jpg[width="65%"]

NOTE: It is likely to be difficult or impossible to glue the filter back in place and return the device to functioning as a normal optical camera.

---

# Source: gs.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

[[gs-camera]]
== Global Shutter Camera

The 1.6-megapixel https://www.raspberrypi.com/products/raspberry-pi-global-shutter-camera/[Global Shutter Camera] captures fast-moving subjects and minimises distortion. It's built built around the Sony IMX296 sensor with a resolution of 1456 × 1088 pixels.

The camera comes with a C/CS-mount for compatibility with a broad variety of lenses. For more information, see xref:../accessories/camera.adoc#lenses[Lenses].

You can trigger the Global Shutter (GS) camera by pulsing the external trigger connection on the board. This feature enables you to synchronise multiple Global Shutter Cameras.
For more information, see xref:../accessories/camera.adoc#external-trigger[External Trigger on the Global Shutter Camera]. .Global Shutter Camera image::images/gs-camera.jpg[GS Camera] For detailed information about the hardware characteristics and capabilities of this camera, see the xref:../accessories/camera.adoc#hardware-specification[hardware specifications]. === Rolling or Global shutter? Most digital cameras, including our other Camera Modules, use a **rolling shutter**: they scan the image they're capturing line-by-line, then output the results. This can cause distortion effects in some settings. For example, a photo of rotating propeller blades can make the image look as though it's shimmering rather than like an object that's rotating. This is because the propeller blades have had enough time to change position in the tiny moment that the camera has taken to scan across and observe the scene. A **global shutter**, like the one on our Global Shutter Camera Module, doesn't do this. It captures the light from every pixel in the scene at once, so a photograph of something like propeller blades doesn't result in the same distortion. This is useful because it makes fast-moving objects, like propeller blades, easy to capture; we can also synchronise several cameras to take a photo at precisely the same moment in time. Benefits include minimising distortion when capturing stereo images; the human brain is confused if any movement that appears in the left eye hasn't yet appeared in the right eye. The Raspberry Pi Global Shutter Camera can also operate with shorter exposure times – down to 30 µs, given enough light – than a rolling shutter camera, which makes it useful for high-speed photography. NOTE: The Global Shutter Camera's image sensor has a 6.3 mm diagonal active sensing area, which is similar in size to Raspberry Pi's HQ Camera. However, the pixels are larger and can collect more light. 
Large pixel size and low pixel count are valuable in machine-vision applications; the more pixels a sensor produces, the harder it is to process the image in real time. To get around this, many applications downsize and crop images. This is unnecessary with the Global Shutter Camera and the appropriate lens magnification, where the lower resolution and large pixel size mean an image can be captured natively. === Transmission characteristics The Global Shutter Camera uses a Hoya CM500 infrared filter. Its transmission characteristics are as represented in the following graph. image::images/hoyacm500.png[CM500 Transmission Graph,width="65%"] If you want to enhance the Global Shutter Camera's sensitivity to infrared light, you can remove the infrared filter. This action is permanent and voids the warranty. For more information, see xref:../accessories/camera.adoc#ir-filter[IR filter]. Without its filter, Raspberry Pi Global Shutter Camera has the following transmission characteristics: image::images/gs.png[GS Camera Transmission Graph without IR-Cut filter,width="65%"] --- # Source: hardware_specification.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[hardware-specification]] == Hardware specifications The following table compares the features and capabilities of the Raspberry Pi camera hardware. 
|===
| | Camera Module 1 | Camera Module 2 | Camera Module 3 | Camera Module 3 Wide | HQ Camera | AI Camera | GS Camera

| Size | Around 25 × 24 × 9 mm | Around 25 × 24 × 9 mm | Around 25 × 24 × 11.5 mm | Around 25 × 24 × 12.4 mm | 38 × 38 × 18.4 mm (excluding lens) | 25 × 24 × 11.9 mm | 38 × 38 × 19.8 mm (29.5 mm with adaptor and dust cap)
| Weight | 3 g | 3 g | 4 g | 4 g | 30.4 g | 6 g | 34 g (41 g with adaptor and dust cap)
| Still resolution | 5 megapixels | 8 megapixels | 11.9 megapixels | 11.9 megapixels | 12.3 megapixels | 12.3 megapixels | 1.58 megapixels
| Video modes | 1080p30, 720p60 and 640 × 480p60/90 | 1080p47, 1640 × 1232p41 and 640 × 480p206 | 2304 × 1296p56, 2304 × 1296p30 HDR, 1536 × 864p120 | 2304 × 1296p56, 2304 × 1296p30 HDR, 1536 × 864p120 | 2028 × 1080p50, 2028 × 1520p40 and 1332 × 990p120 | 2028 × 1520p30, 4056 × 3040p10 | 1456 × 1088p60
| Sensor | OmniVision OV5647 | Sony IMX219 | Sony IMX708 | Sony IMX708 | Sony IMX477 | Sony IMX500 | Sony IMX296
| Sensor resolution | 2592 × 1944 pixels | 3280 × 2464 pixels | 4608 × 2592 pixels | 4608 × 2592 pixels | 4056 × 3040 pixels | 4056 × 3040 pixels | 1456 × 1088 pixels
| Sensor image area | 3.76 × 2.74 mm | 3.68 × 2.76 mm (4.6 mm diagonal) | 6.45 × 3.63 mm (7.4 mm diagonal) | 6.45 × 3.63 mm (7.4 mm diagonal) | 6.287 × 4.712 mm (7.9 mm diagonal) | 6.287 × 4.712 mm (7.9 mm diagonal) | 6.3 mm diagonal
| Pixel size | 1.4 µm × 1.4 µm | 1.12 µm × 1.12 µm | 1.4 µm × 1.4 µm | 1.4 µm × 1.4 µm | 1.55 µm × 1.55 µm | 1.55 µm × 1.55 µm | 3.45 µm × 3.45 µm
| Optical size | 1/4" | 1/4" | 1/2.43" | 1/2.43" | 1/2.3" | 1/2.3" | 1/2.9"
| Focus | Fixed | Adjustable | Motorised | Motorised | Adjustable | Adjustable | Adjustable
| Depth of field | Approx 1 m to ∞ | Approx 10 cm to ∞ | Approx 10 cm to ∞ | Approx 5 cm to ∞ | N/A | Approx 20 cm to ∞ | N/A
| Focal length | 3.60 mm ± 0.01 | 3.04 mm | 4.74 mm | 2.75 mm | Depends on lens | 4.74 mm | Depends on lens
| Horizontal Field of View (FoV) | 53.50 ± 0.13 degrees | 62.2 degrees | 66 degrees | 102 degrees | Depends on lens | 66 ± 3 degrees | Depends on lens
| Vertical Field of View (FoV) | 41.41 ± 0.11 degrees | 48.8 degrees | 41 degrees | 67 degrees | Depends on lens | 52.3 ± 3 degrees | Depends on lens
| Focal ratio (F-Stop) | F2.9 | F2.0 | F1.8 | F2.2 | Depends on lens | F1.79 | Depends on lens
| Maximum exposure time (seconds) | 3.28 | 11.76 | 112 | 112 | 670.74 | 112 | 15.5
| Lens Mount | N/A | N/A | N/A | N/A | C/CS- or M12-mount | N/A | C/CS
| NoIR version available? | Yes | Yes | Yes | Yes | No | No | No
|===

---

# Source: hq.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

[[hq-camera]]
== High Quality Camera

The 12-megapixel https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[High Quality Camera] comes with either an M12-mount or a C/CS-mount for compatibility with a broad variety of lenses. For more information, see xref:../accessories/camera.adoc#lenses[Lenses]. The camera is built around the Sony IMX477 sensor with a resolution of 4056 × 3040 pixels. It can record an exposure time of up to 670.74 seconds.

You can trigger the High Quality Camera by pulsing the external trigger. This feature enables you to synchronise multiple High Quality Cameras. For more information, see xref:../accessories/camera.adoc#synchronous-captures[Synchronous Captures].

.High Quality Camera, M12-mount (left) and C/CS-mount (right)
image::images/hq.jpg[M12- and C/CS-mount versions of the HQ Camera]

For detailed information about the hardware characteristics and capabilities of this camera, see the xref:../accessories/camera.adoc#hardware-specification[hardware specifications].

=== Transmission characteristics

The High Quality Camera uses a Hoya CM500 infrared filter. Its transmission characteristics are as represented in the following graph.
image::images/hoyacm500.png[CM500 Transmission Graph,width="65%"] If you want to enhance the High Quality Camera's sensitivity to infrared light, you can remove the infrared filter. This action is permanent and voids the warranty. For more information, see xref:../accessories/camera.adoc#ir-filter[IR filter]. Without its filter, Raspberry Pi High Quality Camera has the following transmission characteristics: image::images/hq.png[High Quality Camera Transmission Graph without IR-Cut filter,width="65%"] --- # Source: install.adoc *Note: This file could not be automatically converted from AsciiDoc.* :figure-caption!: == Install a Raspberry Pi camera //// TODO: Get a new video with more current products. The following video shows how to connect the original camera on the original Raspberry Pi: video::GImeVqHQzsE[youtube,width=80%,height=400px] //// The process for installing a Raspberry Pi camera is broadly the same for all combinations of camera and board. There are some differences in the connectors; these differences are noted in the following steps. === Step 1. Prepare WARNING: Cameras are sensitive to static. Earth yourself prior to handling the PCB. If you don't have an earthing strap, you can touch a sink tap or similar to earth yourself. To complete this procedure, you need the following items: * A Raspberry Pi board with a camera connector. * A Raspberry Pi camera. + Some cameras might come with a small piece of translucent blue plastic film covering the lens. This is only present to protect the lens during shipping. To remove it, gently peel it off. * The appropriate cable to connect the camera and board. ** All Raspberry Pi cameras use the standard, 15-pin connector. ** Raspberry Pi flagship models up to and including Raspberry Pi 4 use the standard, 15-pin connector. For these boards, use the Standard-Standard camera cable provided with your camera. ** Raspberry Pi 5, all Raspberry Pi Zero models, and Compute Module IO boards use the mini, 22-pin connector. 
For these boards, use the https://www.raspberrypi.com/products/camera-cable/[Standard-Mini camera cable]. ** Some Compute Module Development Kits come with a Compute Module Camera and Display Adaptor (CMCDA) board, which converts the mini, 22-pin connector on the IO board into a standard, 15-pin connector. // Note: maybe do this as a table/matrix when the camera with the mini 22-pin is released === Step 2. Connect the cable to your Raspberry Pi NOTE: If you intend to use the Active Cooler with Raspberry Pi 5, consider connecting the cable to the camera connector on your Raspberry Pi device before installing the Active Cooler. With the Active Cooler in place, accessing the camera connectors can be awkward. . Shut down your Raspberry Pi and disconnect it from power. . Locate the camera connector on your Raspberry Pi board. + The following descriptions assume that you're holding your Raspberry Pi board with the chip and connectors facing up and the Raspberry Pi logo and board name in the correct orientation. For Raspberry Pi Zero boards without the logo on the top side, orient the board with the GPIO header along the edge furthest away from you. + ** On Raspberry Pi Zero devices, the camera connector is on the short edge to the right, opposite the SD card slot. ** On Raspberry Pi flagship models prior to Raspberry Pi 4, the camera connector is by the edge closest to you between the HDMI connector and the audio jack. It's labelled CAMERA. ** On Raspberry Pi 4, the camera connector is by the edge closest to you between the micro HDMI connector and the audio jack. It's labelled CAMERA. ** On Raspberry Pi 5, the two camera and display connectors are by the edge closest to you between the micro HDMI connector and the Ethernet port. They're labelled CAM/DISP0 and CAM/DISP1. You can use either of these connectors for your camera. 
** On Raspberry Pi Compute Module 1/3/3+ IO board, the two camera connectors are on the left edge (the edge closest to the logo on the IO board) at the end closest to you. They're labelled CAM0 and CAM1. You can use either of these connectors for your camera. ** On Raspberry Pi Compute Module 4 IO board, the two camera connectors are by the far left corner. They are labelled CAM0 and CAM1. You can use either of these connectors for your camera. ** On Raspberry Pi Compute Module 5 IO board, the two camera connectors are at the left end of the furthest edge. They are labelled CAM/DISP0 and CAM/DISP1. You can use either of these connectors for your camera. . Open the flap on the connector. .. If there is a strip of film holding the connector flap closed, remove it. .. Gently pull the flap out from the connector until you feel it stop. There is now some freedom of movement in the flap. .. Tilt the flap slightly away from the connector opening. . Insert the end of your camera cable with the metallic contacts facing away from the flap. + Ensure the camera cable is inserted firmly into the connector and is seated straight to correctly align all contacts. Take care not to bend the flexible cable at a sharp angle. . Close the connector flap by tilting it back towards the cable and pushing it down into the connector until you feel it click into place. + The flap holds the cable in place and ensures good contact between the connector pins and the metallic contacts of the cable. You can remove a cable from the connector by reversing these steps. === Step 3. Connect the cable to the camera Our cameras come with the Standard-Standard cable already attached. If you have removed this cable or want to switch to using a different cable, complete the following steps. The camera connector is on the opposite side of the board to the camera lens. Hold the camera with the lens facing down or away from you. . Open the flap on the connector. .. 
Gently pull the flap out from the connector until you feel it stop. There is now some freedom of movement in the flap. .. Tilt the flap slightly away from the connector opening. . Insert the end of your camera cable with the metallic contacts facing away from the flap and towards the camera board. + Ensure the camera cable is inserted firmly into the connector and is seated straight to correctly align all contacts. Take care not to bend the flexible cable at a sharp angle. . Close the connector flap by tilting it back towards the cable and pushing it down into the connector until you feel it click into place. + The flap holds the cable in place and ensures good contact between the connector pins and the metallic contacts of the cable. You can remove a cable from the connector by reversing these steps. === Step 4. Prepare the software . Reconnect your Raspberry Pi device to power and turn it on. . Ensure that your kernel and applications are all up to date by following the instructions on xref:../computers/os.adoc#update-software[keeping your operating system up to date]. . Follow the setup instructions for xref:../computers/camera_software.adoc#rpicam-apps[`rpicam-apps`]. . (Optional) If you want to use the Picamera2 Python library, follow the setup instructions for https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf[Picamera2 Python library]. // TODO should the picamera2 manual info be in the HTML docs instead/as well? Also. Camera software stuff could follow in here rather than being in computers section? --- # Source: intro.adoc *Note: This file could not be automatically converted from AsciiDoc.* :figure-caption!: == About the Camera Modules There are several official Raspberry Pi camera modules. - *Camera Module 1.* A 5-megapixel camera that came in standard (visible light) and NoIR (visible light plus infrared) versions with a standard field of view (FoV). This device is no longer available from Raspberry Pi. 
- xref:../accessories/camera.adoc#camera-module-2[*Camera Module 2.*] An 8-megapixel camera available in standard and NoIR versions with a standard FoV. - xref:../accessories/camera.adoc#camera-module-3[*Camera Module 3.*] A 12-megapixel camera available in standard and NoIR versions. Both the standard and NoIR versions come with standard and wide FoV for a total of four different variants. - xref:../accessories/camera.adoc#hq-camera[*High Quality Camera.*] A 12-megapixel camera that comes with CS- or M12-mount variants for use with external lenses. This model is unavailable as a NoIR version. - xref:../accessories/ai-camera.adoc[*AI Camera.*] A 12-megapixel camera that provides low-latency and high-performance AI capabilities to any camera application. Tight integration with xref:../computers/camera_software.adoc[Raspberry Pi's camera software stack] enables users to deploy their own neural network models with minimal effort. This model is unavailable as a NoIR version. - xref:../accessories/camera.adoc#gs-camera[*Global Shutter Camera.*] A 1.5-megapixel camera that uses a global shutter mechanism. It captures light from every pixel in the scene at once and is ideal for fast-motion photography. It comes with a CS-mount for use with external lenses. This model is unavailable as a NoIR version. To compare the hardware characteristics of these cameras, see the xref:../accessories/camera.adoc#hardware-specification[hardware specifications]. NOTE: Raspberry Pi Camera Modules are compatible with all Raspberry Pi computers with CSI connectors. For information about the camera software, see the xref:../computers/camera_software.adoc[camera software documentation]. --- # Source: lens.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[lenses]] == Recommended Lenses The following lenses are recommended for use with our HQ and GS cameras. 
NOTE: While the HQ Camera is available in both C/CS- and M12-mount versions, the GS Camera is available only with a C/CS-mount. === C/CS Lenses We recommend two lenses, a 6 mm wide angle lens and a 16 mm telephoto lens manufactured by CGL Electronics Co. Ltd. These lenses should be available from your nearest https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[Authorised Reseller]. [cols="1,1,1,1"] |=== 2+| | 16 mm telephoto | 6 mm wide angle 2+| Resolution | 10 MP | 3 MP 2+| Image format | 1" | 1/2" 2+| Aperture | F1.4 to F16 | F1.2 2+| Mount | C | CS .2+| Field of View H°×V° (D°) | HQ | 22.2°×16.7° (27.8°) | 55°×45° (71°) | GS | 17.8°×13.4° (22.3°) | 45°×34° (56°) 2+| Back focal length | 17.53 mm | 7.53 mm 2+| M.O.D. | 0.2 m | 0.2 m 2+| Dimensions | φ39×50 mm | φ30×34 mm |=== === M12 Lenses image::images/m12-lens.jpg[] We recommend three lenses manufactured by https://www.gaojiaoptotech.com/[Gaojia Optotech]. These lenses should be available from your nearest https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[Authorised Reseller]. [cols="1,1,1,1,1"] |=== 2+| | 8 mm | 25 mm | Fish Eye 2+| Resolution | 12 MP | 5 MP | 15 MP 2+| Image format | 1/1.7" | 1/2" | 1/2.3" 2+| Aperture | F1.8 | F2.4 | F2.5 2+| Mount 3+| M12 2+| HQ Field of View H°×V° (D°) | 49°×36° (62°) | 14.4°×10.9° (17.9°) | 140°×102.6° (184.6°) |=== --- # Source: synchronous_cameras.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[synchronous-captures]] == Synchronous Captures The High Quality (HQ) Camera supports synchronous captures. One camera (the "source") can be configured to generate a pulse on its XVS (Vertical Sync) pin when a frame capture is initiated. Other ("sink") cameras can listen for this pulse, and capture a frame at the same time as the source camera. 
This method is largely superseded by xref:../computers/camera_software.adoc#software-camera-synchronisation[software camera synchronisation] which can operate over long distances without additional wires and has sub-millisecond accuracy. But when cameras are physically close, wired synchronisation may be used. NOTE: You can also operate Global Shutter (GS) Cameras in synchronous mode. However, the source camera records one extra frame. Instead, for GS Cameras we recommend using an xref:camera.adoc#external-trigger[external trigger source]. You can't synchronise a GS Camera and an HQ Camera. === Connecting the cameras Solder a wire to the XVS test point of each camera, and connect them together. Solder a wire to the GND test point of each camera, and connect them together. *For GS Cameras only,* you must also connect the XHS (Horizontal Sync) test point of each camera together. On any GS Camera that you wish to act as a sink, bridge the two halves of the MAS pad with solder. NOTE: An earlier version of this document recommended an external pull-up for XVS. This is no longer recommended. Instead, ensure you have the latest version of Raspberry Pi OS and set the `always-on` property for all connected cameras. === Driver configuration Configure the camera drivers to keep their 1.8 V power supplies on when not streaming, and optionally to select the source and sink roles. ==== For the HQ Camera Edit `/boot/firmware/config.txt`. Change `camera_auto_detect=1` to `camera_auto_detect=0`. Append this line for a source camera: [source] ---- dtoverlay=imx477,always-on,sync-source ---- Or for a sink: [source] ---- dtoverlay=imx477,always-on,sync-sink ---- When using the CAM0 port on a Raspberry Pi 5, CM4 or CM5, append `,cam0` to that line without a space. If two cameras are on the same Raspberry Pi, you need two `dtoverlay` lines, only one of them ending with `,cam0`. 
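For example, combining the options above, a source and a sink HQ Camera attached to the same Raspberry Pi 5 (with the sink on the CAM0 port) could be configured like this — a sketch; swap `,cam0` to the other line if your cameras are wired the other way round:

[source]
----
camera_auto_detect=0
dtoverlay=imx477,always-on,sync-source
dtoverlay=imx477,always-on,sync-sink,cam0
----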
Alternatively, if you wish to swap the cameras' roles at runtime (and they are not both connected to the same Raspberry Pi), omit `,sync-source` or `,sync-sink` above. Instead you can set a module parameter before starting each camera: For the Raspberry Pi with the source camera: [source,console] ---- $ echo 1 | sudo tee /sys/module/imx477/parameters/trigger_mode ---- For the Raspberry Pi with the sink camera: [source,console] ---- $ echo 2 | sudo tee /sys/module/imx477/parameters/trigger_mode ---- Do this every time the system is booted. ==== For the GS Camera Edit `/boot/firmware/config.txt`. Change `camera_auto_detect=1` to `camera_auto_detect=0`. For either a source or a sink, append this line: [source] ---- dtoverlay=imx296,always-on ---- When using the CAM0 port on a Raspberry Pi 5, CM4 or CM5, append `,cam0` to that line without a space. If two cameras are on the same Raspberry Pi, you need two `dtoverlay` lines, only one of them ending with `,cam0`. On the GS Camera, the sink role is enabled by the MAS pin and can't be configured by software ("trigger_mode" and "sync-sink" relate to the xref:camera.adoc#external-trigger[external trigger method], and mustn't be set for this method). === Libcamera configuration If the cameras don't all start within 1 second, the `rpicam` applications can time out. To prevent this, edit a configuration file on any Raspberry Pi with sink cameras. On Raspberry Pi 5 or CM5: [source,console] ---- $ cp /usr/share/libcamera/pipeline/rpi/pisp/example.yaml timeout.yaml ---- On other Raspberry Pi models: [source,console] ---- $ cp /usr/share/libcamera/pipeline/rpi/vc4/rpi_apps.yaml timeout.yaml ---- Now edit the copy. In both cases, delete the `#` (comment) from the `"camera_timeout_value_ms":` line, and change the number to `60000` (60 seconds). 
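After uncommenting and editing, the timeout line in your copy of the configuration file should read as follows (surrounding YAML context omitted):

[source,yaml]
----
"camera_timeout_value_ms": 60000,
----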
=== Starting the cameras Run the following commands to start the sink: [source,console] ---- $ export LIBCAMERA_RPI_CONFIG_FILE=timeout.yaml $ rpicam-vid --frames 300 --qt-preview -o sink.h264 ---- Wait a few seconds, then run the following command to start the source: [source,console] ---- $ rpicam-vid --frames 300 --qt-preview -o source.h264 ---- Frames should be synchronised. Use `--frames` to ensure the same number of frames are captured, and that the recordings are exactly the same length. Running the sink first ensures that no frames are missed. NOTE: When using the GS camera in synchronous mode, the sink doesn't record exactly the same number of frames as the source. **The source records one extra frame before the sink starts recording**. Because of this, you need to specify that the sink records one less frame with the `--frames` option. --- # Source: camera.adoc *Note: This file could not be automatically converted from AsciiDoc.* include::camera/intro.adoc[] include::camera/install.adoc[] include::camera/cm2.adoc[] include::camera/cm3.adoc[] include::camera/gs.adoc[] include::camera/hq.adoc[] include::camera/lens.adoc[] include::camera/synchronous_cameras.adoc[] include::camera/external_trigger.adoc[] include::camera/filters.adoc[] include::camera/hardware_specification.adoc[] include::camera/advanced.adoc[] --- # Source: display_intro.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Raspberry Pi Touch Display The https://www.raspberrypi.com/products/raspberry-pi-touch-display/[Raspberry Pi Touch Display] is an LCD display that connects to a Raspberry Pi using a DSI connector and GPIO connector. .The Raspberry Pi 7-inch Touch Display image::images/display.png[The Raspberry Pi 7-inch Touch Display, width="70%"] The Touch Display is compatible with all models of Raspberry Pi, except the Zero series and Keyboard series, which lack a DSI connector. 
The earliest Raspberry Pi models lack appropriate mounting holes, requiring additional mounting hardware to fit the stand-offs on the display PCB. The display has the following key features: * 800×480px RGB LCD display * 24-bit colour * Industrial quality: 140 degree viewing angle horizontal, 120 degree viewing angle vertical * 10-point multi-touch touchscreen * PWM backlight control and power control over I2C interface * Metal-framed back with mounting points for Raspberry Pi display conversion board and Raspberry Pi * Backlight lifetime: 20000 hours * Operating temperature: -20 to +70 degrees centigrade * Storage temperature: -30 to +80 degrees centigrade * Contrast ratio: 500 * Average brightness: 250 cd/m^2^ * Viewing angle (degrees): ** Top - 50 ** Bottom - 70 ** Left - 70 ** Right - 70 * Power requirements: 200mA at 5V typical, at maximum brightness. * Outer dimensions: 192.96 × 110.76 mm * Viewable area: 154.08 × 85.92 mm === Mount the Touch Display You can mount a Raspberry Pi to the back of the Touch Display using its stand-offs and then connect the appropriate cables. You can also mount the Touch Display in a separate chassis if you have one available. The connections remain the same, though you may need longer cables depending on the chassis. .A Raspberry Pi connected to the Touch Display image::images/GPIO_power-500x333.jpg[Image of Raspberry Pi connected to the Touch Display, width="70%"] Connect one end of the Flat Flexible Cable (FFC) to the `RPI-DISPLAY` port on the Touch Display PCB. The silver or gold contacts should face away from the display. Then connect the other end of the FFC to the `DISPLAY` port on the Raspberry Pi. The contacts on this end should face inward, towards the Raspberry Pi. If the FFC is not fully inserted or positioned correctly, you will experience issues with the display. 
You should always double check this connection when troubleshooting, especially if you don't see anything on your display, or the display shows only a single colour. NOTE: A https://datasheets.raspberrypi.com/display/7-inch-display-mechanical-drawing.pdf[mechanical drawing] of the Touch Display is available for download. === Power the Touch Display We recommend using the Raspberry Pi's GPIO to provide power to the Touch Display. Alternatively, you can power the display directly with a separate micro USB power supply. ==== Power from a Raspberry Pi To power the Touch Display using a Raspberry Pi, you need to connect two jumper wires between the 5V and `GND` pins on xref:../computers/raspberry-pi.adoc#gpio[Raspberry Pi's GPIO] and the 5V and `GND` pins on the display, as shown in the following illustration. .The location of the display's 5V and `GND` pins image::images/display_plugs.png[Illustration of display pins, width="40%"] Before you begin, make sure the Raspberry Pi is powered off and not connected to any power source. Connect one end of the black jumper wire to pin six (`GND`) on the Raspberry Pi and one end of the red jumper wire to pin four (5V). If pin six isn't available, you can use any other open `GND` pin to connect the black wire. If pin four isn't available, you can use any other 5V pin to connect the red wire, such as pin two. .The location of the Raspberry Pi headers image::images/pi_plugs.png[Illustration of Raspberry Pi headers, width="40%"] Next, connect the other end of the black wire to the `GND` pin on the display and the other end of the red wire to the 5V pin on the display. Once all the connections are made, you should see the Touch Display turn on the next time you turn on your Raspberry Pi. Use the other three pins on the Touch Display to connect the display to an original Raspberry Pi 1 Model A or B. Refer to our documentation on xref:display.adoc#legacy-support[legacy support] for more information. 
NOTE: To identify an original Raspberry Pi, check the GPIO header connector. Only the original model has a 26-pin GPIO header connector; subsequent models have 40 pins. ==== Power from a micro USB supply If you don't want to use a Raspberry Pi to provide power to the Touch Display, you can use a micro USB power supply instead. We recommend using the https://www.raspberrypi.com/products/micro-usb-power-supply/[Raspberry Pi 12.5 W power supply] to make sure the display runs as intended. Do not connect the GPIO pins on your Raspberry Pi to the display if you choose to use micro USB for power. The only connection between the two boards should be the Flat Flexible Cable. WARNING: When using a micro USB cable to power the display, mount it inside a chassis that blocks access to the display's PCB during usage. === Use an on-screen keyboard Raspberry Pi OS _Bookworm_ and later include the Squeekboard on-screen keyboard by default. When a touch display is attached, the on-screen keyboard should automatically show when it is possible to enter text and automatically hide when it is not possible to enter text. For applications which do not support text entry detection, use the keyboard icon at the right end of the taskbar to manually show and hide the keyboard. You can also permanently show or hide the on-screen keyboard in the **Display** tab of Control Centre or the `Display` section of `raspi-config`. TIP: In Raspberry Pi OS releases prior to _Bookworm_, use `matchbox-keyboard` instead. If you use the wayfire desktop compositor, use `wvkbd` instead. === Change screen orientation If you want to physically rotate the display, or mount it in a specific position, select **Screen Configuration** from the **Preferences** menu. Right-click on the touch display rectangle (likely DSI-1) in the layout editor, select **Orientation**, then pick the best option to fit your needs. 
image::images/display-rotation.png[Screenshot of orientation options in screen configuration, width="80%"] ==== Rotate screen without a desktop To set the screen orientation on a device that lacks a desktop environment, edit the `/boot/firmware/cmdline.txt` configuration file to pass an orientation to the system. Add the following line to `cmdline.txt`: [source,ini] ---- video=DSI-1:800x480@60,rotate=<value> ---- Replace the `<value>` placeholder with one of the following values, which correspond to the degree of rotation relative to the default on your display: * `0` * `90` * `180` * `270` For example, a rotation value of `90` rotates the display 90 degrees to the right. `180` rotates the display 180 degrees, or upside-down. NOTE: It is not possible to rotate the DSI display separately from the HDMI display with `cmdline.txt`. When you use DSI and HDMI simultaneously, they share the same rotation value. ==== Rotate touch input WARNING: Rotating touch input via device tree can cause conflicts with your input library. Whenever possible, configure touch event rotation in your input library or desktop. Rotation of touch input is independent of the orientation of the display itself. To change this you need to manually add a `dtoverlay` instruction in xref:../computers/config_txt.adoc[`/boot/firmware/config.txt`]. 
Add the following line at the end of `config.txt`: [source,ini] ---- dtoverlay=vc4-kms-dsi-7inch,invx,invy ---- Then, disable automatic display detection by removing the following line from `config.txt`, if it exists: [source,ini] ---- display_auto_detect=1 ---- ==== Touch Display device tree option reference The `vc4-kms-dsi-7inch` overlay supports the following options: |=== | DT parameter | Action | `sizex` | Sets X resolution (default 800) | `sizey` | Sets Y resolution (default 480) | `invx` | Invert X coordinates | `invy` | Invert Y coordinates | `swapxy` | Swap X and Y coordinates | `disable_touch` | Disables the touch overlay entirely |=== To specify these options, add them, separated by commas, to your `dtoverlay` line in `/boot/firmware/config.txt`. Boolean values default to true when present, but you can set them to false using the suffix "=0". Integer values require a value, e.g. `sizey=240`. For instance, to set the X resolution to 400 pixels and invert both X and Y coordinates, use the following line: [source,ini] ---- dtoverlay=vc4-kms-dsi-7inch,sizex=400,invx,invy ---- === Installation on Compute Module-based devices All Raspberry Pi SBCs auto-detect the official Touch Displays, because the circuitry connected to the DSI connector on the Raspberry Pi board is fixed; this autodetection ensures the correct Device Tree entries are passed to the kernel. However, Compute Modules are intended for industrial applications where the integrator can use any and all GPIOs and interfaces for whatever purposes they require. Autodetection is therefore not feasible, and hence is disabled on Compute Module devices. This means that the Device Tree fragments required to set up the display need to be loaded via some other mechanism: a `dtoverlay` entry in `config.txt` as described above, a custom base DT file, or, if present, a HAT EEPROM. 
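For example, on a Compute Module device, a minimal `config.txt` entry to load the Touch Display overlay manually might look like this (a sketch; append touch options from the table above as needed):

[source,ini]
----
dtoverlay=vc4-kms-dsi-7inch
----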
--- # Source: legacy.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Legacy Support WARNING: These instructions are for the original Raspberry Pi Model A and Model B boards only. To identify an original Raspberry Pi, check the GPIO header connector. Only the original model has a 26-pin GPIO header connector; subsequent models have 40 pins. The DSI connector on both the Raspberry Pi 1 Model A and B boards does not have the I2C connections required to talk to the touchscreen controller and DSI controller. To work around this, use the additional set of jumper cables provided with the display kit. Connect SCL/SDA on the GPIO header to the horizontal pins marked SCL/SDA on the display board. Power the Model A/B via the GPIO pins using the jumper cables. DSI display autodetection is disabled by default on these boards. To enable detection, add the following line to the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file: [source,ini] ---- ignore_lcd=0 ---- Power the setup via the `PWR IN` micro-USB connector on the display board. Do not power the setup via the Raspberry Pi's micro-USB port. This will exceed the input polyfuse's maximum current rating, since the display consumes approximately 400mA. 
image::images/everything.png[width="80%"] NOTE: It is important that the power supply is connected directly to the Raspberry Pi, and that the keyboard is also connected to the Raspberry Pi. If the power supply were connected to the keyboard, with the Raspberry Pi powered via the keyboard, then the keyboard would not operate correctly. --- # Source: getting-started-keyboard.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Getting Started with your Keyboard Our official keyboard includes three host USB ports for connecting external devices, such as USB mice, USB drives, and other USB-controlled devices. The product's micro USB port is for connection to the Raspberry Pi. Via the USB hub built into the keyboard, the Raspberry Pi controls, and provides power to, the three USB Type A ports. image::images/back-of-keyboard.png[width="80%"] === Keyboard Features The Raspberry Pi keyboard has three lock keys: `Num Lock`, `Caps Lock`, and `Scroll Lock`. There are three LEDs in the top right-hand corner that indicate which locks are enabled. image::images/num-cap-scroll.png[width="80%"] `Num Lock`:: Allows use of the red number keys on the letter keys, effectively creating a numeric keypad. This mode is enabled and disabled by pressing the `Num Lock` key. `Caps Lock`:: Allows typing capital letters; press the `Shift` key to type lower-case letters in this mode. This mode is enabled and disabled by pressing the `Caps Lock` key. `Scroll Lock (ScrLk)`:: Allows use of the cursor keys for browsing web pages and spreadsheets without the mouse. This mode is enabled and disabled by pressing the `ScrLk` key while holding the `Fn` key. --- # Source: getting-started-mouse.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Getting Started with your Mouse Our official mouse has three buttons, which activate high-quality micro-switches. The wheel is for quick scrolling when browsing documents and web pages. 
image::images/the-mouse.png[width="80%"] Always place the mouse on a flat, stable surface while using it. The mouse optically detects movement on the surface on which it is placed. On featureless surfaces, e.g. PVC or acrylic table tops, the mouse cannot detect movement. When you are working on such a surface, place the mouse on a mouse mat. --- # Source: keyboard-and-mouse.adoc *Note: This file could not be automatically converted from AsciiDoc.* include::keyboard-and-mouse/getting-started-keyboard.adoc[] include::keyboard-and-mouse/getting-started-mouse.adoc[] include::keyboard-and-mouse/connecting-things.adoc[] --- # Source: about.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[m2-hat-plus]] == About [.clearfix] -- [.left] .The Raspberry Pi M.2 HAT+ image::images/m2-hat-plus.jpg[width="100%"] [.left] .The Raspberry Pi M.2 HAT+ Compact in the Raspberry Pi Case image::images/m2-hat-plus-compact-with-case.jpg[width="100%"] -- The Raspberry Pi M.2 HAT+ M Key and M.2 HAT+ Compact M Key enable you to connect M.2 peripherals such as NVMe drives and other PCIe accessories to Raspberry Pi 5's PCIe interface. The M.2 HAT+ and M.2 HAT+ Compact adapter boards convert between the PCIe connector on Raspberry Pi 5 and a single M.2 M key edge connector. The M.2 HAT+ supports any device that uses the 2230 or 2242 form factor; the M.2 HAT+ Compact supports any device that uses the 2230 form factor. We provide the M.2 HAT+ in a standard and a compact format to serve different use cases: * The M.2 HAT+ includes threaded spacers that provide ample room to fit the Raspberry Pi Active Cooler beneath it. However, the M.2 HAT+ is _only_ compatible with the https://www.raspberrypi.com/products/raspberry-pi-5-case/[Raspberry Pi Case for Raspberry Pi 5] _if you remove the lid and the included fan_. 
* The M.2 HAT+ Compact is designed to fit around the included fan in the https://www.raspberrypi.com/products/raspberry-pi-5-case/[Raspberry Pi Case for Raspberry Pi 5]. However, you can't fit the Active Cooler beneath it. Both the M.2 HAT+ and M.2 HAT+ Compact conform to the https://datasheets.raspberrypi.com/hat/hat-plus-specification.pdf[Raspberry Pi HAT+ specification], which allows Raspberry Pi OS to automatically detect the HAT+ and any connected devices. == Features The M.2 HAT+ and M.2 HAT+ Compact both have the following features: * Single-lane PCIe 2.0 interface (500 MB/s peak transfer rate) * Support for devices that use the M.2 M key edge connector * Up to 3 A supply to connected M.2 devices * Power and activity LEDs The M.2 HAT+ and M.2 HAT+ Compact differ in the following ways: * M.2 HAT+ supports devices with the 2230 or 2242 form factor; M.2 HAT+ Compact only supports the 2230 form factor. === Hardware The Raspberry Pi M.2 HAT+ or M.2 HAT+ Compact box contains the following parts: * Ribbon cable * Threaded spacers * Screws * 1 knurled double-flanged drive attachment screw to secure and support the M.2 peripheral The M.2 HAT+ also includes a 16 mm GPIO stacking header; M.2 HAT+ Compact doesn't include this component. To use the M.2 HAT+ or M.2 HAT+ Compact, you also need: * A Raspberry Pi 5 [[m2-hat-plus-installation]] == Prepare your Raspberry Pi . Ensure that your Raspberry Pi runs the latest software. Run the following command to update: + [source,console] ---- $ sudo apt update && sudo apt full-upgrade ---- . xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[Ensure that your Raspberry Pi firmware is up-to-date]. Run the following command to see what firmware you're running: + [source,console] ---- $ sudo rpi-eeprom-update ---- + If you see December 6, 2023 or a later date, proceed to the next step. 
If you see a date earlier than December 6, 2023, run the following command to open the Raspberry Pi Configuration CLI: + [source,console] ---- $ sudo raspi-config ---- + Under `Advanced Options` > `Bootloader Version`, choose `Latest`. Then, exit `raspi-config` with `Finish` or the *Escape* key. + Run the following command to update your firmware to the latest version: + [source,console] ---- $ sudo rpi-eeprom-update -a ---- + Then, reboot with `sudo reboot`. . Disconnect the Raspberry Pi from power before beginning installation. [[standard-installation]] == Install the M.2 HAT+ Follow these steps to install the M.2 HAT+. To install the M.2 HAT+ Compact, go to <<compact-installation>> instead. === (Optional) Install the Active Cooler . The M.2 HAT+ is compatible with the Raspberry Pi 5 Active Cooler. If you have an Active Cooler, install it before installing the M.2 HAT+. + -- image::images/m2-hat-plus-installation-01.png[width="60%"] -- === Install the mounting hardware . Install the spacers using the provided screws. . Firmly press the GPIO stacking header on top of the Raspberry Pi GPIO pins; orientation doesn't matter as long as all pins fit into place. . Disconnect the ribbon cable from the M.2 HAT+. Insert the other end into the PCIe port of your Raspberry Pi. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing inward, towards the USB ports. With the ribbon cable fully and evenly inserted into the PCIe port, push the cable holder down from both sides to secure the ribbon cable firmly in place. -- image::images/m2-hat-plus-installation-02.png[width="60%"] -- === Install the board . Set the M.2 HAT+ on top of the spacers and use the remaining screws to secure it in place. + -- image::images/m2-hat-plus-installation-03.png[width="60%"] -- . Insert the ribbon cable into the slot on the M.2 HAT+. + Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing up. 
With the ribbon cable fully and evenly inserted into the port, push the cable holder down from both sides to secure the ribbon cable firmly in place. + -- image::images/m2-hat-plus-installation-04.png[width="60%"] -- === Install your M.2 drive . Remove the drive attachment screw by turning the screw counter-clockwise. Insert your M.2 SSD into the M.2 key edge connector, sliding the drive into the slot at a slight upward angle. Do not force the drive into the slot: it should slide in gently. + -- image::images/m2-hat-plus-installation-05.png[width="60%"] -- . Push the notch on the drive attachment screw into the slot at the end of your M.2 drive. Push the drive flat against the M.2 HAT+, and insert the SSD attachment screw by turning the screw clockwise until the SSD feels secure. Do not over-tighten the screw. + -- image::images/m2-hat-plus-installation-06.png[width="60%"] -- Congratulations, you have successfully installed the M.2 HAT+. .Installed M.2 HAT+ image::images/m2-hat-plus-installation-07.png[width="80%"] [[compact-installation]] == Install the M.2 HAT+ Compact Follow these steps to install the M.2 HAT+ Compact. To install the M.2 HAT+, go to <<standard-installation>> instead. === Install the mounting hardware . Install the spacers using the provided screws. + -- image::images/m2-hat-plus-compact-installation-02.png[width="60%"] -- === Install the board . Set the M.2 HAT+ Compact on top of the spacers and use the remaining screws to secure it in place. + -- image::images/m2-hat-plus-compact-installation-03.png[width="60%"] -- . Insert the ribbon cable into the PCIe port of your Raspberry Pi. + Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing inward, towards the USB ports. With the ribbon cable fully and evenly inserted into the PCIe port, push the cable holder down from both sides to secure the ribbon cable firmly in place. 
+
--
image::images/m2-hat-plus-compact-installation-04.png[width="60%"]
--

=== Install your M.2 drive

. Remove the drive attachment screw by turning the screw counter-clockwise. Insert your M.2 SSD into the M.2 key edge connector, sliding the drive into the slot at a slight upward angle. Do not force the drive into the slot: it should slide in gently.
+
--
image::images/m2-hat-plus-compact-installation-05.png[width="60%"]
--
. Push the notch on the drive attachment screw into the slot at the end of your M.2 drive. Push the drive flat against the M.2 HAT+ Compact, and insert the SSD attachment screw by turning the screw clockwise until the SSD feels secure. Do not over-tighten the screw.
+
--
image::images/m2-hat-plus-compact-installation-06.png[width="60%"]
--

Congratulations, you have successfully installed the M.2 HAT+ Compact.

.Installed M.2 HAT+ Compact
image::images/m2-hat-plus-compact-installation-07.png[width="80%"]

== Start your Raspberry Pi

. Connect your Raspberry Pi to power; Raspberry Pi OS automatically detects the M.2 HAT+ or M.2 HAT+ Compact. If you use Raspberry Pi Desktop, you see an icon representing the drive on your desktop. If you don't use a desktop, you can find the drive at `/dev/nvme0n1`.
. To make your drive automatically available for file access, consider xref:../computers/configuration.adoc#automatically-mount-a-storage-device[configuring automatic mounting].

WARNING: Always disconnect your Raspberry Pi from power before connecting or disconnecting a device from the M.2 slot.

== Boot from NVMe

To boot from an NVMe drive attached to the M.2 HAT+ or M.2 HAT+ Compact, complete the following steps:

. xref:../computers/getting-started.adoc#raspberry-pi-imager[Install an operating system to your NVMe drive by using Raspberry Pi Imager]. You can do this from your Raspberry Pi if you already have an SD card with a Raspberry Pi OS image.
. Reboot your Raspberry Pi.
* If you don't have an SD card inserted in your Raspberry Pi 5, it boots automatically from your NVMe drive.
* If you do have an SD card inserted in your Raspberry Pi 5, it attempts to boot from the SD card first. You can change the boot order on your Raspberry Pi by completing the following steps:
.. Boot your Raspberry Pi into Raspberry Pi OS using an SD card.
.. In a terminal on your Raspberry Pi, run `sudo raspi-config` to open the Raspberry Pi Configuration CLI.
.. Under `Advanced Options` > `Boot Order`, choose `NVMe/USB boot`.
.. Exit `raspi-config` with `Finish` or the *Escape* key.
.. Reboot your Raspberry Pi with `sudo reboot`.

For more information, see xref:../computers/raspberry-pi.adoc#nvme-ssd-boot[NVMe boot].

== Enable PCIe Gen 3

WARNING: The Raspberry Pi 5 is not certified for Gen 3.0 speeds. PCIe Gen 3.0 connections may be unstable.

To enable PCIe Gen 3 speeds, follow the instructions at xref:../computers/raspberry-pi.adoc#pcie-gen-3-0[enable PCIe Gen 3.0].

== Schematics

The schematics for the M.2 HAT+ are available as a https://datasheets.raspberrypi.com/m2-hat-plus/raspberry-pi-m2-hat-plus-schematics.pdf[PDF].

== Product brief

For more information about the M.2 HAT+ and M.2 HAT+ Compact, including mechanical specifications and operating environment limitations, see the https://datasheets.raspberrypi.com/m2-hat-plus/raspberry-pi-m2-hat-plus-product-brief.pdf[product brief].

---
# Source: m2-hat-plus.adoc

include::m2-hat-plus/about.adoc[]

---
# Source: monitor_intro.adoc

== Raspberry Pi Monitor

The https://www.raspberrypi.com/products/raspberry-pi-monitor/[Raspberry Pi Monitor] is a 15.6" 1920 × 1080 IPS LCD display that connects to a computer using an HDMI cable. The Monitor also requires a USB-C power source.
For full brightness and volume range, this must be a USB-PD source capable of supplying at least 1.5A of current.

.The Raspberry Pi Monitor
image::images/monitor-hero.png[The Raspberry Pi Monitor, width="100%"]

The Monitor is compatible with all models of Raspberry Pi that support HDMI output.

=== Controls

The back of the Monitor includes the following controls:

* a button that enters and exits Standby mode (indicated by the ⏻ (power) symbol)
* buttons that increase and decrease display brightness (indicated by the 🔆 (sun) symbol)
* buttons that increase and decrease speaker volume (indicated by the 🔈 (speaker) symbol)

=== On-screen display messages

The on-screen display (OSD) may show the following messages:

[cols="1a,6"]
|===
| Message | Description

| image::images/no-hdmi.png[No HDMI signal detected]
| No HDMI signal detected.

| image::images/no-valid-hdmi-signal-standby.png[Standby mode]
| The monitor will soon enter standby mode to conserve power.

| image::images/not-supported-resolution.png[Unsupported resolution]
| The output display resolution of the connected device is not supported.

| image::images/power-saving-mode.png[Power saving mode]
| The monitor is operating in Power Saving mode, with reduced brightness and volume, because the monitor is not connected to a power supply capable of delivering 1.5A of current or greater.
|===

Additionally, the OSD shows information about display brightness changes using the 🔆 (sun) symbol, and speaker volume level changes using the 🔈 (speaker) symbol. Both brightness and volume use a scale that ranges from 0 to 100.

TIP: If you attempt to exit Standby mode when the display cannot detect an HDMI signal, the red LED beneath the Standby button will briefly light, but the display will remain in Standby mode.

=== Position the Monitor

Use the following approaches to position the Monitor:

* Angle the Monitor on the integrated stand.
* Mount the Monitor on an arm or stand using the four VESA mount holes on the back of the red rear plastic housing.
+
IMPORTANT: Use spacers to ensure adequate space for display and power cable egress.
* Flip the integrated stand fully upwards, towards the top of the monitor. Use the drill hole template to create two mounting points spaced 55 mm apart. Hang the Monitor using the slots on the back of the integrated stand.
+
.Drill hole template
image::images/drill-hole-template.png[Drill hole template, width="40%"]

=== Power the Monitor

The Raspberry Pi Monitor draws power from a 5V https://en.wikipedia.org/wiki/USB_hardware#USB_Power_Delivery[USB Power Delivery] (USB-PD) power source. Many USB-C power supplies, including the official power supplies for the Raspberry Pi 4 and Raspberry Pi 5, support this standard.

When using a power source that provides at least 1.5A of current over USB-PD, the Monitor operates in **Full Power mode**. In Full Power mode, you can use the full range (0-100%) of display brightness and speaker volume.

When using a power source that does _not_ supply at least 1.5A of current over USB-PD (including all USB-A power sources), the Monitor operates in **Power Saving mode**. Power Saving mode limits the maximum display brightness and the maximum speaker volume to ensure reliable operation. In Power Saving mode, you can use a limited range (0-50%) of display brightness and a limited range (0-60%) of speaker volume.

When powered from a Raspberry Pi, the Monitor operates in Power Saving mode, since Raspberry Pi devices cannot provide 1.5A of current over a USB-A connection. To switch from Power Saving mode to Full Power mode, press and hold the *increase brightness* button for 3 seconds.

[TIP]
====
If the Monitor flashes on and off, your USB power supply is not capable of providing sufficient current to power the monitor. This can happen if you power the Monitor from a Raspberry Pi 5, 500, or 500+ which is itself powered by a 5V/3A power supply.
Try the following fixes to stop the Monitor from flashing on and off:

* reduce the display brightness and volume (you may have to connect your monitor to another power supply to access the settings)
* switch to a different power source or cable
====

=== Specification

Diagonal:: 15.6"
Resolution:: 1920 × 1080
Type:: IPS LCD
Colour gamut:: 45%
Contrast:: 800:1
Brightness:: 250 cd/m^2^
Screen coating:: Anti-glare 3H hardness
Display area:: 344 × 193 mm
Dimensions:: 237 × 360 × 20 mm
Weight:: 850g
Supported resolutions::
* 1920 × 1080p @ 50/60Hz
* 1280 × 720p @ 50/60Hz
* 720 × 576p @ 50/60Hz
* 720 × 480p @ 50/60Hz
* 640 × 480p @ 50/60Hz
Input:: HDMI 1.4; supports DDC-CI
Power input:: USB-C; requires 1.5A over USB-PD at 5V for full brightness and volume range
Power consumption:: 4.5-6.5 W during use; < 0.1 W at idle
Speakers:: 2 × 1.2 W (stereo)
Ports:: 3.5 mm audio jack

=== Mechanical drawing

.Mechanical Drawing
image::images/mechanical-drawing.png[Mechanical drawing, width="80%"]

---
# Source: monitor.adoc

include::monitor/monitor_intro.adoc[]

---
# Source: about.adoc

== About

.A Raspberry Pi SD Card inserted into a Raspberry Pi 5
image::images/sd-hero.jpg[width="80%"]

SD card quality is a critical factor in determining the overall user experience for a Raspberry Pi. Slow bus speeds and lack of command queueing can reduce the performance of even the most powerful Raspberry Pi models. Raspberry Pi's official microSD cards support DDR50 and SDR104 bus speeds. Additionally, Raspberry Pi SD cards support the command queueing (CQ) extension, which permits some pipelining of random read operations, ensuring optimal performance. You can even buy Raspberry Pi SD cards pre-programmed with the latest version of Raspberry Pi OS.
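To get a feel for what "4 kB random read" means in practice, the sketch below reads 4 kB blocks at random offsets from a scratch file using only `dd` and shell arithmetic. The file path and read count are arbitrary choices, and because repeated reads are served from the Linux page cache, this exercises the software path rather than the card itself; use a dedicated benchmark such as `fio` for representative IOPS figures.

```shell
# Rough 4 kB random-read sketch (illustrative only: repeated reads are
# served from the page cache, so this measures the software path rather
# than the card; use a benchmark such as fio for real figures).
FILE=/tmp/sd-test.bin
dd if=/dev/zero of="$FILE" bs=1M count=16 status=none  # 16 MB scratch file
READS=200
START=$(date +%s%N)
for _ in $(seq "$READS"); do
    # read one 4 kB block at a random offset within the scratch file
    dd if="$FILE" of=/dev/null bs=4k count=1 \
       skip=$((RANDOM % 4096)) status=none
done
END=$(date +%s%N)
RATE=$(( READS * 1000000000 / (END - START) ))
echo "approximately $RATE reads/sec"
rm -f "$FILE"
```

Run on a file stored on the SD card itself (rather than `/tmp`) after dropping caches, the same loop gives a very rough sense of small-block behaviour, but the IOPS figures quoted in this section come from proper benchmarking tools.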
Raspberry Pi SD cards are available in the following sizes:

* 32 GB
* 64 GB
* 128 GB

== Specifications

.A 128 GB Raspberry Pi SD Card
image::images/sd-cards.png[width="80%"]

Raspberry Pi SD cards use the SD 6.1 specification. Raspberry Pi SD cards use the microSDHC/microSDXC form factor. Raspberry Pi SD cards have the following Speed Class ratings: C10, U3, V30, A2.

The following table describes the read and write speeds of Raspberry Pi SD cards using 4 kB of random data:

|===
| Raspberry Pi Model | Interface | Read Speed | Write Speed

| 4
| DDR50
| 3,200 IOPS
| 1,200 IOPS

| 5
| SDR104
| 5,000 IOPS
| 2,000 IOPS
|===

---
# Source: sd-cards.adoc

include::sd-cards/about.adoc[]

---
# Source: hardware.adoc

== Features

The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors:

* Gyroscope
* Accelerometer
* Magnetometer
* Temperature
* Barometric pressure
* Humidity
* Colour and brightness

Schematics and mechanical drawings for the Sense HAT and the Sense HAT V2 are available for download:

* https://datasheets.raspberrypi.com/sense-hat/sense-hat-schematics.pdf[Sense HAT V1 schematics]
* https://datasheets.raspberrypi.com/sense-hat/sense-hat-v2-schematics.pdf[Sense HAT V2 schematics]
* https://datasheets.raspberrypi.com/sense-hat/sense-hat-mechanical-drawing.pdf[Sense HAT mechanical drawings]

=== LED matrix

The LED matrix is an RGB565 https://www.kernel.org/doc/Documentation/fb/framebuffer.txt[framebuffer] with the id `RPi-Sense FB`. The appropriate device node can be written to as a standard file or mmap-ed. The included snake example shows how to access the framebuffer.

=== Joystick

The joystick comes up as an input event device named `Raspberry Pi Sense HAT Joystick`, mapped to the arrow keys and **Enter**.
It should be supported by any library which is capable of handling inputs, or directly through the https://www.kernel.org/doc/Documentation/input/input.txt[evdev interface]. Suitable libraries include SDL, http://www.pygame.org/docs/[pygame] and https://python-evdev.readthedocs.org/en/latest/[python-evdev]. The included `snake` example shows how to access the joystick directly.

---
# Source: intro.adoc

== About

The https://www.raspberrypi.com/products/sense-hat/[Raspberry Pi Sense HAT] is an add-on board that gives your Raspberry Pi an array of sensing capabilities. The on-board sensors allow you to monitor pressure, humidity, temperature, colour, orientation, and movement. The 8×8 RGB LED matrix allows you to visualise data from the sensors. The five-button joystick lets users interact with your projects.

image::images/Sense-HAT.jpg[width="70%"]

The Sense HAT was originally developed for use on the International Space Station as part of the educational https://astro-pi.org/[Astro Pi] programme run by the https://raspberrypi.org[Raspberry Pi Foundation] in partnership with the https://www.esa.int/[European Space Agency]. It can help with any project that requires position, motion, orientation, or environmental sensing.

An officially supported xref:sense-hat.adoc#use-the-sense-hat-with-python[Python library] provides access to the on-board sensors, LED matrix, and joystick. The Sense HAT is compatible with any Raspberry Pi device with a 40-pin GPIO header.

---
# Source: software.adoc

== Install

In order to work correctly, the Sense HAT requires:

* an up-to-date kernel
* https://en.wikipedia.org/wiki/I%C2%B2C[I2C] enabled on your Raspberry Pi
* a few dependencies

Complete the following steps to get your Raspberry Pi device ready to connect to the Sense HAT:

. First, ensure that your Raspberry Pi runs the latest software.
Run the following command to update:
+
[source,console]
----
$ sudo apt update && sudo apt full-upgrade
----
. Next, install the `sense-hat` package, which will ensure the kernel is up to date, enable I2C, and install the necessary dependencies:
+
[source,console]
----
$ sudo apt install sense-hat
----
. Finally, reboot your Raspberry Pi to enable I2C and load the new kernel, if it changed:
+
[source,console]
----
$ sudo reboot
----

== Calibrate

Install the necessary software and run the calibration program as follows:

[source,console]
----
$ sudo apt update
$ sudo apt install octave -y
$ cd
$ cp /usr/share/librtimulib-utils/RTEllipsoidFit ./ -a
$ cd RTEllipsoidFit
$ RTIMULibCal
----

The calibration program displays the following menu:

----
Options are:
  m - calibrate magnetometer with min/max
  e - calibrate magnetometer with ellipsoid (do min/max first)
  a - calibrate accelerometers
  x - exit

Enter option:
----

Press lowercase `m`. The following message then appears; press any key to start:

----
Magnetometer min/max calibration
--------------------------------
Waggle the IMU chip around, ensuring that all six axes
(+x, -x, +y, -y and +z, -z) go through their extrema.
When all extrema have been achieved, enter 's' to save, 'r' to reset
or 'x' to abort and discard the data.

Press any key to start...
----

After it starts, you should see output similar to the following scrolling up the screen:

----
Min x:  51.60  min y:  69.39  min z:  65.91
Max x:  53.15  max y:  70.97  max z:  67.97
----

Focus on the two lines at the very bottom of the screen, as these are the most recently posted measurements from the program. Now, pick up the Raspberry Pi and Sense HAT and move it around in every possible way you can think of. It helps if you unplug all non-essential cables to avoid clutter. Try to get a complete circle in each of the pitch, roll and yaw axes. Take care not to accidentally eject the SD card while doing this.
Spend a few minutes moving the Sense HAT, and stop when you find that the numbers are not changing any more. Now press lowercase `s`, then lowercase `x`, to exit the program. If you run the `ls` command now, you'll see that a new `RTIMULib.ini` file has been created.

You can also perform the ellipsoid fit by repeating the steps above, but pressing `e` instead of `m`.

When you're done, remove the local copy in `~/.config/sense_hat/` and copy the resulting `RTIMULib.ini` to `/etc/`:

[source,console]
----
$ rm ~/.config/sense_hat/RTIMULib.ini
$ sudo cp RTIMULib.ini /etc
----

== Getting started

After installation, example code can be found under `/usr/src/sense-hat/examples`.

=== Use the Sense HAT with Python

`sense-hat` is the officially supported library for the Sense HAT; it provides access to all of the on-board sensors and the LED matrix. Complete documentation for the library can be found at https://sense-hat.readthedocs.io/en/latest/[sense-hat.readthedocs.io].

=== Use the Sense HAT with C++

https://github.com/RPi-Distro/RTIMULib[RTIMULib] is a {cpp} and Python library that makes it easy to use 9-dof and 10-dof IMUs with embedded Linux systems. A pre-calibrated settings file is provided in `/etc/RTIMULib.ini`, which is also copied and used by `sense-hat`. The included examples look for `RTIMULib.ini` in the current working directory, so you may wish to copy the file there to get more accurate data.

The RTIMULibDrive11 example comes pre-compiled to help ensure everything works as intended. It can be launched by running `RTIMULibDrive11` and closed by pressing `Ctrl C`.

NOTE: The C/{cpp} examples can be compiled by running `make` in the appropriate directory.

== Troubleshooting

=== Read and write EEPROM data

These steps are provided for debugging purposes only.

NOTE: On Raspberry Pi 2 Model B Rev 1.0 and Raspberry Pi 3 Model B boards, these steps may not work. The firmware will take control of I2C0, causing the ID pins to be configured as inputs.
Before you can read and write EEPROM data to and from the Sense HAT, you must complete the following steps:

. Enable I2C0 and I2C1 by adding the following lines to the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file:
+
[source,ini]
----
dtparam=i2c_vc=on
dtparam=i2c_arm=on
----
. Run the following command to reboot:
+
[source,console]
----
$ sudo reboot
----
. Download and build the flash tool:
+
[source,console]
----
$ git clone https://github.com/raspberrypi/hats.git
$ cd hats/eepromutils
$ make
----

==== Read

To read EEPROM data, run the following command:

[source,console]
----
$ sudo ./eepflash.sh -f=sense_read.eep -t=24c32 -r
----

==== Write

NOTE: This operation will not damage your Raspberry Pi or Sense HAT, but if an error occurs, your Raspberry Pi may fail to automatically detect the HAT.

. First, download the EEPROM settings and build the `.eep` binary:
+
[source,console]
----
$ wget https://github.com/raspberrypi/rpi-sense/raw/master/eeprom/eeprom_settings.txt -O sense_eeprom.txt
$ ./eepmake sense_eeprom.txt sense.eep /boot/firmware/overlays/rpi-sense-overlay.dtb
----
. Next, disable write protection:
+
[source,console]
----
$ i2cset -y -f 1 0x46 0xf3 1
----
. Write the EEPROM data:
+
[source,console]
----
$ sudo ./eepflash.sh -f=sense.eep -t=24c32 -w
----
. Finally, re-enable write protection:
+
[source,console]
----
$ i2cset -y -f 1 0x46 0xf3 0
----

---
# Source: sense-hat.adoc

include::sense-hat/intro.adoc[]
include::sense-hat/hardware.adoc[]
include::sense-hat/software.adoc[]

---
# Source: about.adoc

== About

.A 512 GB Raspberry Pi SSD Kit
image::images/ssd-kit.png[width="80%"]

The Raspberry Pi SSD Kit bundles a xref:../accessories/m2-hat-plus.adoc[Raspberry Pi M.2 HAT+] with a xref:../accessories/ssds.adoc[Raspberry Pi SSD].
The Raspberry Pi SSD Kit includes a 16 mm stacking header, spacers, and screws to enable fitting on a Raspberry Pi 5 alongside a Raspberry Pi Active Cooler.

== Install

To install the Raspberry Pi SSD Kit, follow the xref:../accessories/m2-hat-plus.adoc#m2-hat-plus-installation[installation instructions for the Raspberry Pi M.2 HAT+].

---
# Source: ssd-kit.adoc

include::ssd-kit/about.adoc[]

---
# Source: about.adoc

== About

.A 512 GB Raspberry Pi SSD
image::images/ssd.png[width="80%"]

SSD quality is a critical factor in determining the overall user experience for a Raspberry Pi. Raspberry Pi provides official SSDs that are tested to ensure compatibility with Raspberry Pi models and peripherals.

Raspberry Pi SSDs are available in the following sizes:

* 256 GB
* 512 GB
* 1 TB

To use an SSD with your Raspberry Pi, you need a Raspberry Pi 5-compatible M.2 adapter, such as the xref:../accessories/m2-hat-plus.adoc[Raspberry Pi M.2 HAT+ or M.2 HAT+ Compact].

== Specifications

Raspberry Pi SSDs are PCIe Gen 3-compliant. Raspberry Pi SSDs use the NVMe 1.4 register interface and command set. Raspberry Pi SSDs use the M.2 2230 form factor.
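IOPS figures at a fixed 4 kB block size translate directly into approximate bandwidth (IOPS × 4096 bytes). A minimal sketch of that arithmetic, using the 1 TB read figure from the table in this section:

```shell
# Convert a 4 kB random IOPS figure into approximate bandwidth:
# bandwidth (MB/s) ~= IOPS x 4096 bytes / 1,000,000
iops_to_mb_per_s() {
    echo $(( $1 * 4096 / 1000000 ))
}
iops_to_mb_per_s 90000   # 90,000 IOPS at 4 kB is roughly 368 MB/s
```

Note that sequential transfer rates are measured separately and typically exceed this small-block figure.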
The following table describes the read and write speeds of Raspberry Pi SSDs using 4 kB of random data:

[cols="1,2,2"]
|===
| Size | Read Speed | Write Speed

| 256 GB
| 40,000 IOPS
| 70,000 IOPS

| 512 GB
| 50,000 IOPS
| 90,000 IOPS

| 1 TB
| 90,000 IOPS
| 90,000 IOPS
|===

---
# Source: ssds.adoc

include::ssds/about.adoc[]

---
# Source: about.adoc

The https://www.raspberrypi.com/products/touch-display-2/[Raspberry Pi Touch Display 2] is a portrait orientation touchscreen LCD (with rotation options) designed for interactive projects like tablets, entertainment systems, and information dashboards.

.The Raspberry Pi Touch Display 2
image::images/touch-display-2-hero.jpg[width="80%"]

== Specifications

This section describes the physical characteristics and capabilities of Touch Display 2, including dimensions, features, and hardware.

=== Dimensions

The Touch Display 2 is available in two sizes: 5-inch and 7-inch (measured diagonally). Aside from the physical size, these two displays have identical features and functionality. The following table summarises the dimensions of these two displays:

[cols="1,1,1,1,1"]
|===
| | *Depth* | *Outline dimensions* | *Viewing area* | *Active area*

| *5-inch display*
| 16 mm
| 143.5 x 91.5 mm
| 111.5 x 63 mm
| 110.5 x 62 mm

| *7-inch display*
| 15 mm
| 189.5 x 120 mm
| 155.5 x 88 mm
| 154.5 x 87 mm
|===

=== Features

Touch Display 2 (both 5-inch and 7-inch) includes the following features:

* **720 x 1280 pixel resolution.** High-definition output.
* **24-bit RGB display.** Capable of showing over 16 million colours.
* **Multitouch.** Supports up to five simultaneous touch points.
* **Mouse-equivalence.** Supports full desktop control without a physical mouse, for example, selecting, dragging, scrolling, and long-pressing for menus.
* **On-screen keyboard.** Supports a visual keyboard in place of a physical keyboard.
* **Integrated power.** Powered directly by the host Raspberry Pi, requiring no separate power supply.

=== Hardware

The Touch Display 2 box contains the following parts:

- A Touch Display 2
- Eight M2.5 screws
- A 15-way to 15-way FFC
- A 22-way to 15-way FFC for Raspberry Pi 5
- A GPIO power cable

The following image shows these items from top to bottom, left to right.

.Parts included in the Touch Display 2 box
image::images/touch-display-2-whats-in-the-booooox.jpg["Parts included in the Touch Display 2 box", width="80%"]

=== Connectors

The Touch Display 2 connects to a Raspberry Pi using:

- A **DSI connector** for video and touch data.
- The **GPIO header** for power.

To make the DSI connection, use a **Flat Flexible Cable (FFC)** included with your display. The type of FFC you need depends on your Raspberry Pi model:

- For **Raspberry Pi 5**, use the **22-way to 15-way FFC**.
- For all other Raspberry Pi models, use the **15-way to 15-way FFC**.

The Touch Display 2 is compatible with all models of Raspberry Pi from Raspberry Pi 1B+ onwards, except the Zero series and Keyboard series, which lack a DSI connector.

== Connect to Raspberry Pi

After determining the correct FFC for your Raspberry Pi model, you can connect your Touch Display 2 to your Raspberry Pi. After completing the following steps, you can reconnect your Raspberry Pi to power. It can take up to one minute for Raspberry Pi OS to start displaying output to the Touch Display 2 screen.

.A Raspberry Pi 5 connected and mounted to the Touch Display 2
image::images/touch-display-2-installation-diagram.png["A Raspberry Pi 5 connected and mounted to the Touch Display 2", width="80%"]

IMPORTANT: Disconnect your Raspberry Pi from power before completing the following steps.

=== Step 1. Connect FFC to Touch Display 2

. Slide the retaining clip outwards from both sides of the FFC connector on the Touch Display 2.
.
Insert one 15-way end of your FFC into the Touch Display 2 FFC connector, with the metal contacts facing upwards, away from the Touch Display 2.
- If you're connecting to a Raspberry Pi 5, and therefore using the **22-way to 15-way FFC**, the 22-way end is the smaller end of the cable. Insert the larger end of the cable into the Touch Display 2 FFC connector.
- If you're using the **15-way to 15-way FFC**, insert either end of the cable into the Touch Display 2 FFC connector.
. Hold the FFC firmly in place and simultaneously push the retaining clip back into the Touch Display 2 FFC connector from both sides.

=== Step 2. Connect FFC to Raspberry Pi

. Slide the retaining clip upwards from both sides of the DSI connector of your Raspberry Pi.
- This port should be marked with some variation of the term **DISPLAY**, **CAM/DISP**, or **DISP**.
- If your Raspberry Pi has multiple DSI connectors, we recommend using the port labelled **1**.
. Insert the other end of your FFC into the Raspberry Pi DSI connector, with the metal contacts facing the Ethernet and USB-A ports.
. Hold the FFC firmly in place and simultaneously push the retaining clip back down on the FFC connector of the Raspberry Pi to secure the cable.

=== Step 3. Connect the GPIO power cable

. Plug the smaller end of the GPIO power cable into the **J1** port on the Touch Display 2.
. Connect the three-pin end of the GPIO power cable to your xref:../computers/raspberry-pi.adoc#gpio[Raspberry Pi's GPIO]. This connects the red cable (5 V power) to pin 2 and the black cable (ground) to pin 6. Viewed from above, with the Ethernet and USB-A ports facing down, these pins are located in the top-right corner of the board, with pin 2 in the top right-most position.

.The GPIO connection to the Touch Display 2
image::images/touch-display-2-gpio-connection.png[The GPIO connection to the Touch Display 2, width="40%"]

WARNING: Connecting the power cable incorrectly might cause damage to the display.

=== Step 4.
Mount your Raspberry Pi to the Touch Display 2 (optional)

Optionally, use the included M2.5 screws to mount your Raspberry Pi to the back of your Touch Display 2.

. Align the four corner stand-offs of your Raspberry Pi with the four mounting points that surround the FFC connector and J1 port on the back of the Touch Display 2.
. Insert the M2.5 screws (included) into the four corner stand-offs and tighten until your Raspberry Pi is secure. Take care not to pinch the FFC.

== Use an on-screen keyboard

Raspberry Pi OS **Bookworm** and later already includes the **Squeekboard on-screen keyboard**. With a Touch Display 2 attached, the keyboard automatically appears when you can enter text, and automatically disappears when you can't. For applications that don't support text entry detection, you can manually show or hide the keyboard using the keyboard icon at the right side of the taskbar.

You can also permanently show or hide the on-screen keyboard using the Raspberry Pi graphical interface or the command line.

- **Raspberry Pi desktop interface:** From the Raspberry Pi menu, go to **Preferences > Control Centre > Display** and choose your on-screen keyboard setting.
- **Command line:** Open a terminal and enter `sudo raspi-config`. Navigate to the **Display** section of `raspi-config` and then choose your keyboard setting.

== Change screen orientation

You can change the orientation behaviour of the Touch Display 2, both with a desktop and without a desktop. This is useful if you want to physically rotate the screen or mount it in a landscape position. You have four rotation options:

- **0** maintains the default display position, which is a portrait orientation.
- **90** rotates the display 90 degrees to the right (clockwise), making it a landscape orientation.
- **180** rotates the display 180 degrees to the right (clockwise), which flips the display upside down.
- **270** rotates the display 270 degrees to the right (clockwise), which is the same as rotating the display 90 degrees to the left (counter-clockwise), making it a landscape orientation.

=== With a desktop

If you have the Raspberry Pi OS desktop running, you can rotate the display through the **Screen Configuration** tool:

. Go to **Preferences > Screen Configuration**. This opens the layout editor where you can see your connected displays.
. Right-click the rectangle in the layout editor that represents your Touch Display 2 (likely labelled `DSI-1`).
. Select **Orientation**.
. Choose a rotation: *0°*, *90°*, *180°*, or *270°*. This rotates the display by the specified number of degrees to the right.

=== Without a desktop

To rotate the display without a desktop, edit the `/boot/firmware/cmdline.txt` file, which contains parameters that Raspberry Pi OS reads when it boots. Add the following to the end of `cmdline.txt`, replacing `<degrees>` with the number of degrees to rotate by (`0`, `90`, `180`, or `270`):

[source,ini]
----
video=DSI-1:720x1280@60,rotate=<degrees>
----

This `rotate=` setting only rotates the text-mode console; any applications that write directly to DRM (such as `cvlc` or the libcamera apps) won't be rotated, and will instead need to use their own rotation options (if available).

NOTE: You can't rotate the DSI display separately from the HDMI display with `cmdline.txt`. When you use DSI and HDMI simultaneously, they share the same rotation value.

== Customise touchscreen settings

You can use the Device Tree overlay to tell Raspberry Pi OS how to configure the Touch Display 2 at boot.

- For the 5-inch display, the overlay is called `vc4-kms-dsi-ili9881-5inch`.
- For the 7-inch display, the overlay is called `vc4-kms-dsi-ili9881-7inch`.

You can modify the Device Tree overlay in the boot configuration file (`/boot/firmware/config.txt`). Open `/boot/firmware/config.txt` and then add the required Device Tree parameters to the `dtoverlay` line, separated by commas.
- Booleans (`invx`, `invy`, `swapxy`, and `disable_touch`) default to true if present, but you can set them to false using the suffix `=0`.
- Integers (`sizex` and `sizey`) require a number, for example, `sizey=240`.

See the table below for details.

=== Device Tree options

|===
| Parameter | Action

| `sizex`
| Sets the touch horizontal resolution (default 720)

| `sizey`
| Sets the touch vertical resolution (default 1280)

| `invx`
| Inverts the touch X-axis (left/right)

| `invy`
| Inverts the touch Y-axis (up/down)

| `swapxy`
| Swaps the touch X and Y axes (rotate 90° logically)

| `disable_touch`
| Disables the touchscreen functionality
|===

=== Example

In the following example, `invx` flips the X axis and `invy` flips the Y axis for a 7-inch Touch Display 2:

[source,ini]
----
dtoverlay=vc4-kms-dsi-ili9881-7inch,invx,invy
----

== Connect to a Compute Module

Unlike Raspberry Pi single-board computers (SBCs), which automatically detect the official Raspberry Pi Touch Displays, Raspberry Pi Compute Modules don't automatically detect connected devices; you must tell the system which display is attached. On a Raspberry Pi SBC, the connections between the SoC and the DSI connectors are fixed, so the system knows what hardware can be connected; auto-detection ensures that the correct Device Tree settings are passed to the Linux kernel, and the display works without additional configuration.

Compute Modules, intended for industrial and custom applications, expose all GPIOs and interfaces. This provides greater flexibility for connecting hardware, but means that a Compute Module can't automatically detect devices like the Touch Display 2. For Compute Modules, you must therefore manually specify the Device Tree fragments that tell the kernel how to interact with the display. You can do this in three ways:

- By adding an overlay entry in `config.txt`. This is the simplest option.
For configuration instructions, see the xref:../computers/compute-module.adoc#attaching-the-touch-display-2-lcd-panel[Compute Module hardware documentation].
- Using a custom base Device Tree file. This is an advanced method not covered in this documentation.
- Using a HAT EEPROM (if present).

--- # Source: touch-display-2.adoc

include::touch-display-2/about.adoc[]

--- # Source: about-tv-hat.adoc

[[tv-hat]]
== About

.The Raspberry Pi TV HAT
image::images/tv-hat.jpg[width="80%"]

The Raspberry Pi TV HAT allows you to receive digital terrestrial TV broadcasts on a Raspberry Pi, using an onboard DVB-T and DVB-T2 tuner. With the board you can receive and view TV on a Raspberry Pi, or create a TV server that streams received TV over a network to other devices.

The TV HAT can be used with any 40-pin Raspberry Pi board as a server for other devices on the network. Performance when receiving and viewing TV on the Raspberry Pi itself can vary, so we recommend a Raspberry Pi 2 or later for this purpose.

Key features:

* Sony CXD2880 TV tuner
* Supported TV standards: DVB-T2, DVB-T
* Reception frequency: VHF III, UHF IV, UHF V
* Channel bandwidth:
** DVB-T2: 1.7 MHz, 5 MHz, 6 MHz, 7 MHz, 8 MHz
** DVB-T: 5 MHz, 6 MHz, 7 MHz, 8 MHz

== About DVB-T

WARNING: The TV HAT does not support ATSC, the digital TV standard used in North America.

Digital Video Broadcasting – Terrestrial (DVB-T) is the European-based DVB consortium standard for the broadcast transmission of digital terrestrial television. Other digital TV standards are used elsewhere in the world, e.g. ATSC in North America; however, the TV HAT only supports the DVB-T and DVB-T2 standards.
.DTT system implemented or adopted (Source: DVB/EBU/BNE DTT Deployment Database, March 2023)
image::images/dvbt-map.png[width="80%"]

[[tv-hat-installation]]
== Install

Follow our xref:../computers/getting-started.adoc[getting started] documentation and set up your Raspberry Pi with the newest version of Raspberry Pi OS.

Connect the aerial adaptor to the TV HAT. With the adaptor pointing away from the USB ports, press the HAT gently down over the Raspberry Pi's GPIO pins. Place the spacers at two or three of the corners of the HAT, and tighten the screws through the mounting holes to hold them in place. Then connect the TV HAT's aerial adaptor to the cable from your TV aerial.

The software we recommend for decoding the streams (known as multiplexes, or muxes for short) and viewing content is called TVHeadend. The TV HAT can decode one mux at a time, and each mux can contain several channels to choose from. Content can either be viewed on the Raspberry Pi to which the TV HAT is connected, or sent to another device on the same network.

Boot your Raspberry Pi, open a terminal window, and run the following two commands to install the `tvheadend` software:

[source,console]
----
$ sudo apt update
$ sudo apt install tvheadend
----

During the `tvheadend` installation, you will be asked to choose an administrator account name and password. You'll need these later, so make sure to pick something you can remember.

On another computer on your network, open a web browser and type the following into the address bar: `http://raspberrypi.local:9981/extjs.html`

This should connect to `tvheadend` running on the Raspberry Pi. Once you have connected to `tvheadend` via the browser, you will be prompted to sign in using the account name and password you chose when you installed `tvheadend` on the Raspberry Pi. A setup wizard should appear.
You will first be asked to set the language you want `tvheadend` to use, and then to set up network, user, and administrator access. If you don't have specific preferences, leave *Allowed network* blank, and enter an asterisk (`*`) in the *username* and *password* fields. This will let anyone connected to your local network access `tvheadend`.

You should see a window titled *Network settings*. Under *Network 2*, you should see `Tuner: Sony CXD2880 #0 : DVB-T #0`. For *Network type*, choose `DVB-T Network`.

The next window is *Assign predefined muxes to networks*; here, you select the TV stream to receive and decode. Under *Network 1*, for predefined muxes, select your local TV transmitter.

NOTE: You can find your local transmitter using the https://www.freeview.co.uk/help[Freeview website]. Enter your postcode to see which transmitter should give you a good signal.

When you click *Save & Next*, the software will start scanning for the selected mux and show a progress bar. After about two minutes, you should see something like:

----
Found muxes: 8
Found services: 172
----

In the next window, titled *Service mapping*, tick all three boxes: *Map all services*, *Create provider tags*, and *Create network tags*. You should see a list of TV channels you can watch, along with the programmes they're currently showing.

To watch a TV channel in the browser, click the little TV icon to the left of the channel listing, just to the right of the *i* icon. This brings up an in-browser media player. Depending on the decoding facilities built into your browser and the type of stream being played, playback may be jerky. In that case, we recommend using a local media player as the playback application.

To watch a TV channel in a local media player, e.g. https://www.videolan.org/vlc[VLC], you'll need to download it as a stream. Click the *i* icon to the left of a channel listing to bring up the information panel for that channel.
Here you can see a stream file that you can download.

NOTE: `tvheadend` is supported by numerous apps, such as TvhClient for iOS, which will play TV from the Raspberry Pi.

== Mechanical Drawing

.The Raspberry Pi TV HAT
image::images/mechanical.png[]

--- # Source: tv-hat.adoc

include::tv-hat/about-tv-hat.adoc[]

--- # Source: about.adoc

== About

The https://www.raspberrypi.com/products/usb-3-hub/[Raspberry Pi USB 3 Hub] provides extra connectivity for your devices, extending one USB-A port into four. An optional external USB-C power input supports high-power peripherals. You can use the USB 3 Hub to power low-power peripherals, such as most mice and keyboards, with no external power.

.The Raspberry Pi USB 3 Hub
image::images/usb-3-hub-hero.png[width="80%"]

== Specification

* 1× upstream USB 3.0 Type-A male connector on 8 cm captive cable
* 4× downstream USB 3.0 Type-A ports
* Data transfer speeds up to 5 Gbps
* Power transfer up to 900 mA (4.5 W); optional external USB-C power input provides up to 5 V @ 3 A for high-power downstream peripherals
* Compatible with USB 3.0 and USB 2.0 Type-A host ports

.Physical specification
image::images/usb-3-hub-physical-specification.png[]

--- # Source: usb-3-hub.adoc

include::usb-3-hub/about.adoc[]

--- # Source: getting-started.adoc

== Getting Started

This guide will help you set up a Hailo NPU with your Raspberry Pi 5. This will enable you to run `rpicam-apps` camera demos using an AI neural network accelerator.
=== Prerequisites

For this guide, you will need the following:

* a Raspberry Pi 5
* one of the following NPUs:
** a xref:../accessories/ai-kit.adoc[Raspberry Pi AI Kit], which includes:
*** an M.2 HAT+
*** a pre-installed Hailo-8L AI module
** a xref:../accessories/ai-hat-plus.adoc[Raspberry Pi AI HAT+]
* a 64-bit Raspberry Pi OS _Bookworm_ install
* any official Raspberry Pi camera (e.g. Camera Module 3 or High Quality Camera)

=== Hardware setup

. Attach the camera to your Raspberry Pi 5 board following the instructions at xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[Install a Raspberry Pi Camera]. You can skip reconnecting your Raspberry Pi to power, because you'll need to disconnect your Raspberry Pi from power for the next step.
. Depending on your NPU, follow the installation instructions for the xref:../accessories/ai-kit.adoc#ai-kit-installation[AI Kit] or xref:../accessories/ai-hat-plus.adoc#ai-hat-plus-installation[AI HAT+] to get your hardware connected to your Raspberry Pi 5.
. Follow the instructions to xref:raspberry-pi.adoc#pcie-gen-3-0[enable PCIe Gen 3.0]. This step is optional, but _highly recommended_ to achieve the best performance with your NPU.
. Install the dependencies required to use the NPU. Run the following commands from a terminal window:
+
[source,console]
----
$ sudo apt install dkms
$ sudo apt install hailo-all
----
+
This installs the following dependencies:
+
* Hailo kernel device driver and firmware
* HailoRT middleware software
* Hailo Tappas core post-processing libraries
* the `rpicam-apps` Hailo post-processing software demo stages
. Finally, reboot your Raspberry Pi with `sudo reboot` for these settings to take effect.
.
To ensure everything is running correctly, run the following command:
+
[source,console]
----
$ hailortcli fw-control identify
----
+
If you see output similar to the following, you've successfully installed the NPU and its software dependencies:
+
----
Executing on device: 0000:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.17.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8L
Serial Number: HLDDLBB234500054
Part Number: HM21LB1C2LAE
Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
----
+
NOTE: AI HAT+ devices may show `<N/A>` for `Serial Number`, `Part Number` and `Product Name`. This is expected, and does not impact functionality.
+
Additionally, you can run `dmesg | grep -i hailo` to check the kernel logs, which should yield output similar to the following:
+
----
[    3.049657] hailo: Init module. driver version 4.17.0
[    3.051983] hailo 0000:01:00.0: Probing on: 1e60:2864...
[    3.051989] hailo 0000:01:00.0: Probing: Allocate memory for device extension, 11600
[    3.052006] hailo 0000:01:00.0: enabling device (0000 -> 0002)
[    3.052011] hailo 0000:01:00.0: Probing: Device enabled
[    3.052028] hailo 0000:01:00.0: Probing: mapped bar 0 - 000000000d8baaf1 16384
[    3.052034] hailo 0000:01:00.0: Probing: mapped bar 2 - 000000009eeaa33c 4096
[    3.052039] hailo 0000:01:00.0: Probing: mapped bar 4 - 00000000b9b3d17d 16384
[    3.052044] hailo 0000:01:00.0: Probing: Force setting max_desc_page_size to 4096 (recommended value is 16384)
[    3.052052] hailo 0000:01:00.0: Probing: Enabled 64 bit dma
[    3.052055] hailo 0000:01:00.0: Probing: Using userspace allocated vdma buffers
[    3.052059] hailo 0000:01:00.0: Disabling ASPM L0s
[    3.052070] hailo 0000:01:00.0: Successfully disabled ASPM L0s
[    3.221043] hailo 0000:01:00.0: Firmware was loaded successfully
[    3.231845] hailo 0000:01:00.0: Probing: Added board 1e60-2864, /dev/hailo0
----
.
To ensure the camera is operating correctly, run the following command:
+
[source,console]
----
$ rpicam-hello -t 10s
----
+
This starts the camera and shows a preview window for ten seconds.

Once you have verified everything is installed correctly, it's time to run some demos.

=== Demos

The `rpicam-apps` suite of camera applications implements a xref:camera_software.adoc#post-processing-with-rpicam-apps[post-processing framework]. This section contains a few demo post-processing stages that highlight some of the capabilities of the NPU.

The following demos use xref:camera_software.adoc#rpicam-hello[`rpicam-hello`], which by default displays a preview window. However, you can use other `rpicam-apps` instead, including xref:camera_software.adoc#rpicam-vid[`rpicam-vid`] and xref:camera_software.adoc#rpicam-still[`rpicam-still`]. You may need to add or modify some command line options to make the demo commands compatible with alternative applications.

To begin, run the following command to install the latest `rpicam-apps` software package:

[source,console]
----
$ sudo apt update && sudo apt install rpicam-apps
----

==== Object Detection

This demo displays bounding boxes around objects detected by a neural network. To disable the viewfinder, use the xref:camera_software.adoc#nopreview[`-n`] flag. To return purely textual output describing the objects detected, add the `-v 2` option. Run the following command to try the demo on your Raspberry Pi:

[source,console]
----
$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov6_inference.json
----

Alternatively, you can try another model with different trade-offs in performance and efficiency.
To run the demo with the Yolov8 model, run the following command:

[source,console]
----
$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_inference.json
----

To run the demo with the YoloX model, run the following command:

[source,console]
----
$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolox_inference.json
----

To run the demo with the Yolov5 Person and Face model, run the following command:

[source,console]
----
$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov5_personface.json
----

==== Image Segmentation

This demo performs object detection and segments the object by drawing a colour mask on the viewfinder image. Run the following command to try the demo on your Raspberry Pi:

[source,console]
----
$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov5_segmentation.json --framerate 20
----

==== Pose Estimation

This demo performs 17-point human pose estimation, drawing lines connecting the detected points. Run the following command to try the demo on your Raspberry Pi:

[source,console]
----
$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_pose.json
----

=== Alternative Package Versions

The AI Kit and AI HAT+ do not function if there is a version mismatch between the Hailo software packages and device drivers. In addition, Hailo's neural network tooling may require a particular version for generated model files. If you require a specific version, complete the following steps to install the proper versions of all of the dependencies:

. If you have previously used `apt-mark` to hold any of the relevant packages, you may need to unhold them:
+
[source,console]
----
$ sudo apt-mark unhold hailo-tappas-core hailort hailo-dkms
----
.
Install the required version of the software packages:
+
[tabs]
======
4.19::
To install version 4.19 of Hailo's neural network tooling, run the following commands:
+
[source,console]
----
$ sudo apt install hailo-tappas-core=3.30.0-1 hailort=4.19.0-3 hailo-dkms=4.19.0-1 python3-hailort=4.19.0-2
----
+
[source,console]
----
$ sudo apt-mark hold hailo-tappas-core hailort hailo-dkms python3-hailort
----

4.18::
To install version 4.18 of Hailo's neural network tooling, run the following commands:
+
[source,console]
----
$ sudo apt install hailo-tappas-core=3.29.1 hailort=4.18.0 hailo-dkms=4.18.0-2
----
+
[source,console]
----
$ sudo apt-mark hold hailo-tappas-core hailort hailo-dkms
----

4.17::
To install version 4.17 of Hailo's neural network tooling, run the following commands:
+
[source,console]
----
$ sudo apt install hailo-tappas-core=3.28.2 hailort=4.17.0 hailo-dkms=4.17.0-1
----
+
[source,console]
----
$ sudo apt-mark hold hailo-tappas-core hailort hailo-dkms
----
======

=== Further Resources

Hailo has also created a set of demos that you can run on a Raspberry Pi 5, available in the https://github.com/hailo-ai/hailo-rpi5-examples[hailo-ai/hailo-rpi5-examples GitHub repository].

You can find Hailo's extensive model zoo, which contains a large number of neural networks, in the https://github.com/hailo-ai/hailo_model_zoo/tree/master/docs/public_models/HAILO8L[hailo-ai/hailo_model_zoo GitHub repository].

Check out the https://community.hailo.ai/[Hailo community forums and developer zone] for further discussion of the Hailo hardware and tooling.

--- # Source: ai.adoc

include::ai/getting-started.adoc[]

--- # Source: camera_usage.adoc

This documentation describes how to use supported camera modules with our software tools.
All Raspberry Pi cameras can record high-resolution photographs and full HD 1080p video (or better) with our software tools. Raspberry Pi produces several official camera modules, including:

* the original 5-megapixel Camera Module 1 (discontinued)
* the 8-megapixel https://www.raspberrypi.com/products/camera-module-v2/[Camera Module 2], with or without an infrared filter
* the 12-megapixel https://raspberrypi.com/products/camera-module-3/[Camera Module 3], with both standard and wide lenses, with or without an infrared filter
* the 12-megapixel https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[High Quality Camera], with CS- and M12-mount variants for use with external lenses
* the 1.6-megapixel https://www.raspberrypi.com/products/raspberry-pi-global-shutter-camera/[Global Shutter Camera] for fast motion photography
* the 12-megapixel https://www.raspberrypi.com/products/ai-camera/[AI Camera], which uses the Sony IMX500 imaging sensor to provide low-latency, high-performance AI capabilities to any camera application

For more information about camera hardware, see the xref:../accessories/camera.adoc#about-the-camera-modules[camera hardware documentation].

First, xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[install your camera module]. Then, follow the guides in this section to put your camera module to use.

[WARNING]
====
This guide no longer covers the _legacy camera stack_, which was available in _Bullseye_ and earlier Raspberry Pi OS releases. The legacy camera stack, using applications like `raspivid`, `raspistill` and the original `Picamera` (_not_ `Picamera2`) Python library, has been deprecated for many years and is now unsupported. The legacy camera stack only supports the Camera Module 1, Camera Module 2 and the High Quality Camera, and will never support any newer camera modules. Nothing in this document applies to the legacy camera stack.
====

--- # Source: csi-2-usage.adoc

== Unicam

Raspberry Pi SoCs all have two camera interfaces that support either CSI-2 D-PHY 1.1 or Compact Camera Port 2 (CCP2) sources. This interface is known by the codename Unicam. The first instance of Unicam supports two CSI-2 data lanes, while the second supports four. Each lane can run at up to 1 Gbit/s (DDR, so the maximum link frequency is 500 MHz).

Compute Modules and Raspberry Pi 5 route out all lanes from both peripherals. Other models prior to Raspberry Pi 5 only expose the second instance, routing out only two of the data lanes to the camera connector.

=== Software interfaces

The V4L2 software interface is the only means of communicating with the Unicam peripheral. There used to also be firmware and MMAL rawcam component interfaces, but these are no longer supported.

==== V4L2

NOTE: The V4L2 interface for Unicam is available only when using `libcamera`.

There is a fully open-source kernel driver available for the Unicam block; this kernel module, called `bcm2835-unicam`, interfaces with V4L2 subdevice drivers to deliver raw frames. This `bcm2835-unicam` driver controls the sensor and configures the Camera Serial Interface 2 (CSI-2) receiver. Peripherals write raw frames (after Debayer) to SDRAM for V4L2 to deliver to applications. There is no image processing between the camera sensor capturing the image and the `bcm2835-unicam` driver placing the image data in SDRAM, except for Bayer unpacking to 16 bits/pixel.

----
|------------------------|
|     bcm2835-unicam     |
|------------------------|
     ^            |
     |            |-------------|
 img |            |  Subdevice  |
     |            |-------------|
     v   -SW/HW-        |
|---------|       |-----------|
| Unicam  |       | I2C or SPI|
|---------|       |-----------|
csi2/ ^                 |
ccp2  |                 |
    |-----------------|
    |     sensor      |
    |-----------------|
----

Mainline Linux contains a range of existing drivers.
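The lane counts and per-lane rate above put an upper bound on raw sensor throughput. As a rough sanity check, the sketch below estimates the ceiling frame rate for a given mode; it deliberately ignores CSI-2 packet overhead and blanking, so real achievable figures are somewhat lower:

```python
def max_frame_rate(width, height, bits_per_pixel, lanes, lane_gbps=1.0):
    """Optimistic upper bound on frame rate for a raw stream over CSI-2.

    Ignores CSI-2 protocol overhead and horizontal/vertical blanking,
    so this is a ceiling, not an achievable figure.
    """
    link_bps = lanes * lane_gbps * 1e9          # total link capacity in bit/s
    bits_per_frame = width * height * bits_per_pixel
    return link_bps / bits_per_frame

# 1080p RAW10 over the two-lane Unicam instance:
fps = max_frame_rate(1920, 1080, 10, lanes=2)
print(f"{fps:.0f} fps")  # prints "96 fps"
```

Doubling the lane count (as on Compute Modules and Raspberry Pi 5) doubles this ceiling, which is why four-lane sensors can sustain higher-resolution or higher-frame-rate modes.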
The Raspberry Pi kernel tree has some additional drivers and Device Tree overlays to configure them:

|===
| Device | Type | Notes

| Omnivision OV5647 | 5 MP camera | Original Raspberry Pi Camera
| Sony IMX219 | 8 MP camera | Revision 2 Raspberry Pi camera
| Sony IMX477 | 12 MP camera | Raspberry Pi HQ camera
| Sony IMX708 | 12 MP camera | Raspberry Pi Camera Module 3
| Sony IMX296 | 1.6 MP camera | Raspberry Pi Global Shutter Camera Module
| Toshiba TC358743 | HDMI to CSI-2 bridge |
| Analog Devices ADV728x-M | Analogue video to CSI-2 bridge | No interlaced support
| Infineon IRS1125 | Time-of-flight depth sensor | Supported by a third party
|===

As the subdevice driver is also a kernel driver with a standardised API, third parties are free to write their own for any source of their choosing.

=== Write a third-party driver

This is the recommended approach to interfacing via Unicam. When developing a driver for a new device intended to be used with the `bcm2835-unicam` module, you need the driver and corresponding Device Tree overlays. Ideally, the driver should be submitted to the http://vger.kernel.org/vger-lists.html#linux-media[linux-media] mailing list for code review and merging into mainline, then moved to the https://github.com/raspberrypi/linux[Raspberry Pi kernel tree]; but exceptions may be made for the driver to be reviewed and merged directly into the Raspberry Pi kernel.

NOTE: All kernel drivers are licensed under the GPLv2 licence, so source code must be available. Shipping binary modules only is a violation of the GPLv2 licence under which the Linux kernel is licensed.

The `bcm2835-unicam` module has been written to try to accommodate all types of CSI-2 source driver currently found in the mainline Linux kernel. Broadly, these can be split into camera sensors and bridge chips. Bridge chips allow for conversion between some other format and CSI-2.
==== Camera sensors

The sensor driver for a camera sensor is responsible for all configuration of the device, usually via I2C or SPI. Rather than writing a driver from scratch, it is often easier to take an existing driver as a basis and modify it as appropriate. The https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/imx219.c[IMX219 driver] is a good starting point. This driver supports both 8-bit and 10-bit Bayer readout, so enumerating frame formats and frame sizes is slightly more involved.

Sensors generally support https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/control.html[V4L2 user controls]. Not all of these controls need to be implemented in a driver. The IMX219 driver implements only the small subset listed below; their implementation is handled by the `imx219_set_ctrl` function.

* `V4L2_CID_PIXEL_RATE` / `V4L2_CID_VBLANK` / `V4L2_CID_HBLANK`: allows the application to set the frame rate
* `V4L2_CID_EXPOSURE`: sets the exposure time in lines; the application needs to use `V4L2_CID_PIXEL_RATE`, `V4L2_CID_HBLANK`, and the frame width to compute the line time
* `V4L2_CID_ANALOGUE_GAIN`: analogue gain in sensor-specific units
* `V4L2_CID_DIGITAL_GAIN`: optional digital gain in sensor-specific units
* `V4L2_CID_HFLIP` / `V4L2_CID_VFLIP`: flips the image horizontally or vertically; this operation may change the Bayer order of the data in the frame, as is the case on the IMX219
* `V4L2_CID_TEST_PATTERN` / `V4L2_CID_TEST_PATTERN_*`: enables output of various test patterns from the sensor; useful for debugging

In the case of the IMX219, many of these controls map directly onto register writes to the sensor itself.

Further guidance can be found in the `libcamera` https://git.linuxtv.org/libcamera.git/tree/Documentation/sensor_driver_requirements.rst[sensor driver requirements], and in chapter 3 of the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide].
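As a concrete illustration of the line-time calculation mentioned above for `V4L2_CID_EXPOSURE`, the sketch below converts a desired exposure time into sensor lines. The pixel rate, width, and blanking figures are example values chosen for illustration, not register values read from a real sensor mode:

```python
def exposure_to_lines(exposure_us, pixel_rate_hz, width, hblank):
    """Convert an exposure time in microseconds to sensor lines.

    The sensor reads out (width + hblank) pixels per line at
    pixel_rate_hz pixels per second, so one line takes
    (width + hblank) / pixel_rate_hz seconds.
    """
    line_time_us = (width + hblank) * 1e6 / pixel_rate_hz
    return round(exposure_us / line_time_us)

# Example figures only: a 182.4 MHz pixel rate, a 1920-pixel-wide frame
# and 1448 pixels of horizontal blanking give a line time of ~18.5 us.
lines = exposure_to_lines(10_000, 182_400_000, 1920, 1448)
print(lines)  # prints 542, i.e. ~542 lines for a 10 ms exposure
```

This is exactly the arithmetic an application must perform before writing `V4L2_CID_EXPOSURE`, since that control is specified in lines rather than in units of time.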
===== Device Tree

Device Tree is used to select the sensor driver and configure parameters such as the number of CSI-2 lanes, continuous clock lane operation, and link frequency (often only one is supported). The IMX219 https://github.com/raspberrypi/linux/blob/rpi-6.1.y/arch/arm/boot/dts/overlays/imx219-overlay.dts[Device Tree overlay] for the 6.1 kernel is available on GitHub.

==== Bridge chips

These are devices that convert an incoming video stream, for example HDMI or composite, into a CSI-2 stream that can be accepted by the Raspberry Pi CSI-2 receiver. Handling bridge chips is more complicated: unlike camera sensors, they have to respond to the incoming signal and report its properties to the application. The mechanisms for handling bridge chips fall into two categories: analogue and digital.

When using `ioctls` in the sections below, an `_S_` in the `ioctl` name means it is a set function, `_G_` is a get function, and `_ENUM_` enumerates a set of permitted values.

===== Analogue video sources

Analogue video sources use the standard `ioctls` for detecting and setting video standards. https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-std.html[`VIDIOC_G_STD`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-std.html[`VIDIOC_S_STD`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-enumstd.html[`VIDIOC_ENUMSTD`], and https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-querystd.html[`VIDIOC_QUERYSTD`] are available. Selecting the wrong standard will generally result in corrupt images. Setting the standard will typically also set the resolution on the V4L2 CAPTURE queue; it cannot be set via `VIDIOC_S_FMT`. Generally, requesting the detected standard via `VIDIOC_QUERYSTD` and then setting it with `VIDIOC_S_STD` before streaming is a good idea.
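The query-then-set sequence above can be sketched in a few lines. A C application would normally call these `ioctls` directly with the definitions from `linux/videodev2.h`; the Python sketch below recomputes the same request numbers with the standard `_IOC` encoding (direction, size, type, number) and assumes a native-endian 64-bit `v4l2_std_id`:

```python
import fcntl
import struct

# Linux _IOC encoding: direction in the top two bits, then argument
# size, ioctl "type" character, and ioctl number.
_IOC_WRITE, _IOC_READ = 1, 2

def _ioc(direction, ioc_type, nr, size):
    return (direction << 30) | (size << 16) | (ord(ioc_type) << 8) | nr

# From linux/videodev2.h: both take a v4l2_std_id (a 64-bit integer).
VIDIOC_S_STD    = _ioc(_IOC_WRITE, 'V', 24, 8)
VIDIOC_QUERYSTD = _ioc(_IOC_READ,  'V', 63, 8)

def lock_detected_standard(fd):
    """Query the standard the bridge has detected, then set it."""
    buf = bytearray(8)
    fcntl.ioctl(fd, VIDIOC_QUERYSTD, buf)       # driver fills in the std mask
    std = struct.unpack('=Q', buf)[0]
    fcntl.ioctl(fd, VIDIOC_S_STD, struct.pack('=Q', std))
    return std

# Usage (requires an analogue bridge device, e.g. an ADV728x-M):
#   fd = os.open('/dev/video0', os.O_RDWR)
#   std = lock_detected_standard(fd)
```

The same pattern, with the `DV_TIMINGS` ioctls substituted, applies to the digital sources described in the next section.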
===== Digital video sources

For digital video sources, such as HDMI, there is an alternative set of calls that allows all the digital timing parameters to be specified: https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-dv-timings.html[`VIDIOC_G_DV_TIMINGS`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-dv-timings.html[`VIDIOC_S_DV_TIMINGS`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-enum-dv-timings.html[`VIDIOC_ENUM_DV_TIMINGS`], and https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-query-dv-timings.html[`VIDIOC_QUERY_DV_TIMINGS`].

As with analogue bridges, the timings typically fix the V4L2 CAPTURE queue resolution, and calling `VIDIOC_S_DV_TIMINGS` with the result of `VIDIOC_QUERY_DV_TIMINGS` before streaming should ensure the format is correct.

Depending on the bridge chip and the driver, changes in the input source may be reported to the application via `VIDIOC_SUBSCRIBE_EVENT` and `V4L2_EVENT_SOURCE_CHANGE`.

===== Currently supported devices

There are two bridge chips currently supported by the Raspberry Pi Linux kernel: the Analog Devices ADV728x-M for analogue video sources, and the Toshiba TC358743 for HDMI sources.

Analog Devices ADV728x(A)-M analogue video to CSI-2 bridge chips convert composite, S-video (Y/C), or component (YPrPb) video into a single-lane CSI-2 interface, and are supported by the https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/adv7180.c[ADV7180 kernel driver].

Product details for the various versions of this chip can be found on the Analog Devices website: https://www.analog.com/en/products/adv7280a.html[ADV7280A], https://www.analog.com/en/products/adv7281a.html[ADV7281A], and https://www.analog.com/en/products/adv7282a.html[ADV7282A].
Because of some missing code in the current core V4L2 implementation, selecting the source fails, so the Raspberry Pi kernel version adds a kernel module parameter called `dbg_input` to the ADV7180 kernel driver, which sets the input source every time `VIDIOC_S_STD` is called. At some point mainline will fix the underlying issue (a mismatch between the kernel API call `s_routing` and the userspace call `VIDIOC_S_INPUT`), and this modification will be removed.

Receiving interlaced video is not supported, so the ADV7281(A)-M version of the chip is of limited use, as it doesn't have the necessary I2P deinterlacing block. When selecting a device, also ensure you specify the -M option; without it, you will get a parallel output bus which cannot be interfaced to the Raspberry Pi.

There are no known commercially available boards using these chips, but this driver has been tested via the Analog Devices https://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-boards-kits/EVAL-ADV7282A-M.html[EVAL-ADV7282-M evaluation board].

This driver can be loaded using the `config.txt` dtoverlay `adv7282m` if you are using the `ADV7282-M` chip variant, or `adv728x-m` with a parameter of either `adv7280m=1`, `adv7281m=1`, or `adv7281ma=1` if you are using a different variant:

----
dtoverlay=adv728x-m,adv7280m=1
----

The Toshiba TC358743 is an HDMI to CSI-2 bridge chip, capable of converting video data at up to 1080p60. Information on this bridge chip can be found on the https://toshiba.semicon-storage.com/ap-en/semiconductor/product/interface-bridge-ics-for-mobile-peripheral-devices/hdmir-interface-bridge-ics/detail.TC358743XBG.html[Toshiba website].

The TC358743 interfaces HDMI into CSI-2 and I2S outputs. It is supported by the https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/tc358743.c[TC358743 kernel module]. The chip supports incoming HDMI signals as either RGB888, YUV444, or YUV422, at up to 1080p60.
It can forward RGB888, or convert it to YUV444 or YUV422, and can convert either way between YUV444 and YUV422. Only RGB888 and YUV422 support has been tested. When using two CSI-2 lanes, the maximum rates that can be supported are 1080p30 as RGB888, or 1080p50 as YUV422. When using four lanes on a Compute Module, 1080p60 can be received in either format.

HDMI negotiates the resolution by a receiving device advertising an https://en.wikipedia.org/wiki/Extended_Display_Identification_Data[EDID] of all the modes that it can support. The kernel driver has no knowledge of the resolutions, frame rates, or formats that you wish to receive, so it is up to the user to provide a suitable file via the `VIDIOC_S_EDID` ioctl, or more easily using `v4l2-ctl --fix-edid-checksums --set-edid=file=filename.txt` (adding the `--fix-edid-checksums` option means that you don't have to get the checksum values correct in the source file). Generating the required EDID file (a textual hexdump of a binary EDID file) is not too onerous, and there are tools available to generate them, but it is beyond the scope of this page.

As described above, use the `DV_TIMINGS` ioctls to configure the driver to match the incoming video. The easiest approach is to use the command `v4l2-ctl --set-dv-bt-timings query`. The driver does support generating the `SOURCE_CHANGED` events, should you wish to write an application to handle a changing source. Changing the output pixel format is achieved by setting it via `VIDIOC_S_FMT`, but only the pixel format field will be updated, as the resolution is configured by the DV timings.

There are a couple of commercially available boards that connect this chip to the Raspberry Pi. The Auvidea B101 and B102 are the most widely obtainable, but other equivalent boards are available.

This driver is loaded using the `config.txt` dtoverlay `tc358743`.

The chip also supports capturing stereo HDMI audio via I2S.
The Auvidea boards break the relevant signals out onto a header, which can be connected to the Raspberry Pi's 40-pin header. The required wiring is:

[cols=",^,^,^"]
|===
| Signal | B101 header | 40-pin header | BCM GPIO

| LRCK/WFS | 7 | 35 | 19
| BCK/SCK | 6 | 12 | 18
| DATA/SD | 5 | 38 | 20
| GND | 8 | 39 | N/A
|===

The `tc358743-audio` overlay is required _in addition to_ the `tc358743` overlay. This should create an ALSA recording device for the HDMI audio. There is no resampling of the audio. The presence of audio is reflected in the V4L2 control `TC358743_CID_AUDIO_PRESENT` (audio-present), and the sample rate of the incoming audio is reflected in the V4L2 control `TC358743_CID_AUDIO_SAMPLING_RATE` (audio sampling-frequency). Recording when no audio is present, or at a sample rate different from that reported, emits a warning.

--- # Source: libcamera_differences.adoc

=== Differences between `rpicam` and `raspicam`

The `rpicam-apps` emulate most features of the legacy `raspicam` applications. However, users may notice the following differences:

* Boost `program_options` doesn't allow multi-character short versions of options, so where these were present they have had to be dropped. The long-form options are named the same way, and any single-character short forms are preserved.
* `rpicam-still` and `rpicam-jpeg` do not show the captured image in the preview window.
* `rpicam-apps` removed the following `raspicam` features:
+
** opacity (`--opacity`)
** image effects (`--imxfx`)
** colour effects (`--colfx`)
** annotation (`--annotate`, `--annotateex`)
** dynamic range compression, or DRC (`--drc`)
** stereo (`--stereo`, `--decimate` and `--3dswap`)
** image stabilisation (`--vstab`)
** demo modes (`--demo`)
+
xref:camera_software.adoc#post-processing-with-rpicam-apps[Post-processing] replaced many of these features.
* `rpicam-apps` removed xref:camera_software.adoc#rotation[`rotation`] option support for 90° and 270° rotations.
* `raspicam` conflated metering and exposure; `rpicam-apps` separates these options.
* To disable Auto White Balance (AWB) in `rpicam-apps`, set a pair of colour gains with xref:camera_software.adoc#awbgains[`awbgains`] (e.g. `1.0,1.0`).
* `rpicam-apps` cannot set Auto White Balance (AWB) into greyworld mode for NoIR camera modules. Instead, pass the xref:camera_software.adoc#tuning-file[`tuning-file`] option a NoIR-specific tuning file like `imx219_noir.json`.
* `rpicam-apps` does not provide explicit control of digital gain. Instead, the xref:camera_software.adoc#gain[`gain`] option sets it implicitly.
* `rpicam-apps` removed the `--ISO` option. Instead, calculate the gain corresponding to the ISO value required. Vendors can provide mappings of gain to ISO.
* `rpicam-apps` does not support setting a flicker period.
* `rpicam-still` does not support burst capture. Instead, consider using `rpicam-vid` in MJPEG mode with `--segment 1` to force each frame into a separate file.
* `rpicam-apps` uses open source drivers for all image sensors, so the mechanism for enabling or disabling on-sensor Defective Pixel Correction (DPC) is different. The imx477 driver on the Raspberry Pi HQ Camera enables on-sensor DPC by default. To disable on-sensor DPC on the HQ Camera, run the following command:
+
[source,console]
----
$ echo 0 | sudo tee /sys/module/imx477/parameters/dpc_enable
----

---

# Source: libcamera_python.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

[[picamera2]]
=== Use `libcamera` from Python with Picamera2

The https://github.com/raspberrypi/picamera2[Picamera2 library] is a `rpicam`-based replacement for Picamera, which was a Python interface to Raspberry Pi's legacy camera stack. Picamera2 presents an easy-to-use Python API.
Documentation about Picamera2 is available https://github.com/raspberrypi/picamera2[on GitHub] and in the https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf[Picamera2 manual].

==== Installation

Recent Raspberry Pi OS images include Picamera2 with all the GUI (Qt and OpenGL) dependencies. Recent Raspberry Pi OS Lite images include Picamera2 without the GUI dependencies, although preview images can still be displayed using DRM/KMS.

If your image did not include Picamera2, run the following command to install Picamera2 with all of the GUI dependencies:

[source,console]
----
$ sudo apt install -y python3-picamera2
----

If you don't want the GUI dependencies, run the following command instead:

[source,console]
----
$ sudo apt install -y python3-picamera2 --no-install-recommends
----

NOTE: If you previously installed Picamera2 with `pip`, uninstall it with: `pip3 uninstall picamera2`.

---

# Source: qt.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Use `libcamera` with Qt

Qt is a popular application framework and GUI toolkit. `rpicam-apps` includes an option to use Qt for a camera preview window.

Unfortunately, Qt defines certain symbols (such as `slot` and `emit`) as macros in the global namespace. This causes errors when including `libcamera` files. The problem is common to all platforms that use both Qt and `libcamera`. Try the following workarounds to avoid these errors:

* List `libcamera` include files, or files that include `libcamera` files (such as `rpicam-apps` files), _before_ any Qt header files whenever possible.
* If you do need to mix your Qt application files with `libcamera` includes, replace `signals:` with `Q_SIGNALS:`, `slots:` with `Q_SLOTS:`, `emit` with `Q_EMIT` and `foreach` with `Q_FOREACH`.
* Add the following at the top of any `libcamera` include files: + [source,cpp] ---- #undef signals #undef slots #undef emit #undef foreach ---- * If your project uses `qmake`, add `CONFIG += no_keywords` to the project file. * If your project uses `cmake`, add `SET(QT_NO_KEYWORDS ON)`. --- # Source: rpicam_apps_building.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Advanced `rpicam-apps` === Build `libcamera` and `rpicam-apps` Build `libcamera` and `rpicam-apps` for yourself for the following benefits: * You can pick up the latest enhancements and features. * `rpicam-apps` can be compiled with extra optimisation for Raspberry Pi 3 and Raspberry Pi 4 devices running a 32-bit OS. * You can include optional OpenCV and/or TFLite post-processing stages, or add your own. * You can customise or add your own applications derived from `rpicam-apps` ==== Remove pre-installed `rpicam-apps` Raspberry Pi OS includes a pre-installed copy of `rpicam-apps`. Before building and installing your own version of `rpicam-apps`, you must first remove the pre-installed version. Run the following command to remove the `rpicam-apps` package from your Raspberry Pi: [source,console] ---- $ sudo apt remove --purge rpicam-apps ---- ==== Building `rpicam-apps` without building `libcamera` To build `rpicam-apps` without first rebuilding `libcamera` and `libepoxy`, install `libcamera`, `libepoxy` and their dependencies with `apt`: [source,console] ---- $ sudo apt install -y libcamera-dev libepoxy-dev libjpeg-dev libtiff5-dev libpng-dev libopencv-dev ---- TIP: If you do not need support for the GLES/EGL preview window, omit `libepoxy-dev`. 
To use the Qt preview window, install the following additional dependencies: [source,console] ---- $ sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5 ---- For xref:camera_software.adoc#libav-integration-with-rpicam-vid[`libav`] support in `rpicam-vid`, install the following additional dependencies: [source,console] ---- $ sudo apt install libavcodec-dev libavdevice-dev libavformat-dev libswresample-dev ---- If you run Raspberry Pi OS Lite, install `git`: [source,console] ---- $ sudo apt install -y git ---- Next, xref:camera_software.adoc#building-rpicam-apps[build `rpicam-apps`]. ==== Building `libcamera` NOTE: Only build `libcamera` from scratch if you need custom behaviour or the latest features that have not yet reached `apt` repositories. [NOTE] ====== If you run Raspberry Pi OS Lite, begin by installing the following packages: [source,console] ---- $ sudo apt install -y python3-pip git python3-jinja2 ---- ====== First, install the following `libcamera` dependencies: [source,console] ---- $ sudo apt install -y libboost-dev $ sudo apt install -y libgnutls28-dev openssl libtiff5-dev pybind11-dev $ sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5 $ sudo apt install -y meson cmake $ sudo apt install -y python3-yaml python3-ply $ sudo apt install -y libglib2.0-dev libgstreamer-plugins-base1.0-dev ---- Now we're ready to build `libcamera` itself. 
Download a local copy of Raspberry Pi's fork of `libcamera` from GitHub: [source,console] ---- $ git clone https://github.com/raspberrypi/libcamera.git ---- Navigate into the root directory of the repository: [source,console] ---- $ cd libcamera ---- Next, run `meson` to configure the build environment: [source,console] ---- $ meson setup build --buildtype=release -Dpipelines=rpi/vc4,rpi/pisp -Dipas=rpi/vc4,rpi/pisp -Dv4l2=true -Dgstreamer=enabled -Dtest=false -Dlc-compliance=disabled -Dcam=disabled -Dqcam=disabled -Ddocumentation=disabled -Dpycamera=enabled ---- NOTE: You can disable the `gstreamer` plugin by replacing `-Dgstreamer=enabled` with `-Dgstreamer=disabled` during the `meson` build configuration. If you disable `gstreamer`, there is no need to install the `libglib2.0-dev` and `libgstreamer-plugins-base1.0-dev` dependencies. Now, you can build `libcamera` with `ninja`: [source,console] ---- $ ninja -C build ---- Finally, run the following command to install your freshly-built `libcamera` binary: [source,console] ---- $ sudo ninja -C build install ---- TIP: On devices with 1 GB of memory or less, the build may exceed available memory. Append the `-j 1` flag to `ninja` commands to limit the build to a single process. This should prevent the build from exceeding available memory on devices like the Raspberry Pi Zero and the Raspberry Pi 3. `libcamera` does not yet have a stable binary interface. Always build `rpicam-apps` after you build `libcamera`. ==== Building `rpicam-apps` First fetch the necessary dependencies for `rpicam-apps`. 
[source,console] ---- $ sudo apt install -y cmake libboost-program-options-dev libdrm-dev libexif-dev $ sudo apt install -y meson ninja-build ---- Download a local copy of Raspberry Pi's `rpicam-apps` GitHub repository: [source,console] ---- $ git clone https://github.com/raspberrypi/rpicam-apps.git ---- Navigate into the root directory of the repository: [source,console] ---- $ cd rpicam-apps ---- For desktop-based operating systems like Raspberry Pi OS, configure the `rpicam-apps` build with the following `meson` command: [source,console] ---- $ meson setup build -Denable_libav=enabled -Denable_drm=enabled -Denable_egl=enabled -Denable_qt=enabled -Denable_opencv=disabled -Denable_tflite=disabled -Denable_hailo=disabled ---- For headless operating systems like Raspberry Pi OS Lite, configure the `rpicam-apps` build with the following `meson` command: [source,console] ---- $ meson setup build -Denable_libav=disabled -Denable_drm=enabled -Denable_egl=disabled -Denable_qt=disabled -Denable_opencv=disabled -Denable_tflite=disabled -Denable_hailo=disabled ---- [TIP] ====== * Use `-Dneon_flags=armv8-neon` to enable optimisations for 32-bit OSes on Raspberry Pi 3 or Raspberry Pi 4. * Use `-Denable_opencv=enabled` if you have installed OpenCV and wish to use OpenCV-based post-processing stages. * Use `-Denable_tflite=enabled` if you have installed TensorFlow Lite and wish to use it in post-processing stages. * Use `-Denable_hailo=enabled` if you have installed HailoRT and wish to use it in post-processing stages. ====== You can now build `rpicam-apps` with the following command: [source,console] ---- $ meson compile -C build ---- TIP: On devices with 1 GB of memory or less, the build may exceed available memory. Append the `-j 1` flag to `meson` commands to limit the build to a single process. This should prevent the build from exceeding available memory on devices like the Raspberry Pi Zero and the Raspberry Pi 3. 
Finally, run the following command to install your freshly-built `rpicam-apps` binary:

[source,console]
----
$ sudo meson install -C build
----

[TIP]
====
The command above should automatically update the `ldconfig` cache. If you have trouble accessing your new `rpicam-apps` build, run the following command to update the cache:

[source,console]
----
$ sudo ldconfig
----
====

Run the following command to check that your device uses the new binary:

[source,console]
----
$ rpicam-still --version
----

The output should include the date and time of your local `rpicam-apps` build.

Finally, follow the `dtoverlay` and display driver instructions in the xref:camera_software.adoc#configuration[Configuration section].

==== `rpicam-apps` meson flag reference

The `meson` build configuration for `rpicam-apps` supports the following flags:

`-Dneon_flags=armv8-neon`:: Speeds up certain post-processing features on Raspberry Pi 3 or Raspberry Pi 4 devices running a 32-bit OS.
`-Denable_libav=enabled`:: Enables or disables `libav` encoder integration.
`-Denable_drm=enabled`:: Enables or disables **DRM/KMS preview rendering**, a preview window used in the absence of a desktop environment.
`-Denable_egl=enabled`:: Enables or disables the non-Qt desktop environment-based preview. Disable if your system lacks a desktop environment.
`-Denable_qt=enabled`:: Enables or disables support for the Qt-based implementation of the preview window. Disable if you do not have a desktop environment installed or if you have no intention of using the Qt-based preview window. The Qt-based preview is normally not recommended because it is computationally very expensive; however, it does work with X display forwarding.
`-Denable_opencv=enabled`:: Forces OpenCV-based post-processing stages to link or not link. Requires OpenCV to enable. Defaults to `disabled`.
`-Denable_tflite=enabled`:: Enables or disables TensorFlow Lite post-processing stages. Disabled by default. Requires TensorFlow Lite to enable.
Depending on how you have built and/or installed TFLite, you may need to tweak the `meson.build` file in the `post_processing_stages` directory.

`-Denable_hailo=enabled`:: Enables or disables HailoRT-based post-processing stages. Requires HailoRT to enable. Defaults to `auto`.
`-Ddownload_hailo_models=true`:: Downloads and installs models for HailoRT post-processing stages. Requires `wget` to be installed. Defaults to `true`.

Each of the above options (except for `neon_flags`) supports the following values:

* `enabled`: enables the option, fails the build if dependencies are not available
* `disabled`: disables the option
* `auto`: enables the option if dependencies are available

==== Building `libepoxy`

Rebuilding `libepoxy` should not normally be necessary, as this library changes only very rarely. If you do want to build it from scratch, however, please follow the instructions below.

Start by installing the necessary dependencies:

[source,console]
----
$ sudo apt install -y libegl1-mesa-dev
----

Next, download a local copy of the `libepoxy` repository from GitHub:

[source,console]
----
$ git clone https://github.com/anholt/libepoxy.git
----

Navigate into the root directory of the repository:

[source,console]
----
$ cd libepoxy
----

Create a build directory at the root level of the repository, then navigate into that directory:

[source,console]
----
$ mkdir _build
$ cd _build
----

Next, run `meson` to configure the build environment:

[source,console]
----
$ meson
----

Now, you can build `libepoxy` with `ninja`:

[source,console]
----
$ ninja
----

Finally, run the following command to install your freshly-built `libepoxy` binary:

[source,console]
----
$ sudo ninja install
----

---

# Source: rpicam_apps_getting_help.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Getting help

For further help with `libcamera` and the `rpicam-apps`, check the https://forums.raspberrypi.com/viewforum.php?f=43[Raspberry Pi Camera forum].
Before posting: * Make a note of your operating system version (`uname -a`). * Make a note of your `libcamera` and `rpicam-apps` versions (`rpicam-hello --version`). * Report the make and model of the camera module you are using. * Report the software you are trying to use. We don't support third-party camera module vendor software. * Report your Raspberry Pi model, including memory size. * Include any relevant excerpts from the application's console output. If there are specific problems in the camera software (such as crashes), consider https://github.com/raspberrypi/rpicam-apps[creating an issue in the `rpicam-apps` GitHub repository], including the same details listed above. --- # Source: rpicam_apps_intro.adoc *Note: This file could not be automatically converted from AsciiDoc.* == `rpicam-apps` [NOTE] ==== Raspberry Pi OS _Bookworm_ renamed the camera capture applications from ``libcamera-\*`` to ``rpicam-*``. Symbolic links allow users to use the old names for now. **Adopt the new application names as soon as possible.** Raspberry Pi OS versions prior to _Bookworm_ still use the ``libcamera-*`` name. ==== Raspberry Pi supplies a small set of example `rpicam-apps`. These CLI applications, built on top of `libcamera`, capture images and video from a camera. These applications include: * `rpicam-hello`: A "hello world"-equivalent for cameras, which starts a camera preview stream and displays it on the screen. * `rpicam-jpeg`: Runs a preview window, then captures high-resolution still images. * `rpicam-still`: Emulates many of the features of the original `raspistill` application. * `rpicam-vid`: Captures video. * `rpicam-raw`: Captures raw (unprocessed Bayer) frames directly from the sensor. * `rpicam-detect`: Not built by default, but users can build it if they have TensorFlow Lite installed on their Raspberry Pi. Captures JPEG images when certain objects are detected. 
Recent versions of Raspberry Pi OS include the five basic `rpicam-apps`, so you can record images and videos using a camera even on a fresh Raspberry Pi OS installation.

Users can create their own `rpicam`-based applications with custom functionality to suit their own requirements. The https://github.com/raspberrypi/rpicam-apps[`rpicam-apps` source code] is freely available under a BSD-2-Clause licence.

=== `libcamera`

`libcamera` is an open-source software library aimed at supporting camera systems directly from the Linux operating system on Arm processors. Proprietary code running on the Broadcom GPU is minimised. For more information about `libcamera`, see the https://libcamera.org[`libcamera` website].

`libcamera` provides a {cpp} API that configures the camera, then allows applications to request image frames. These image buffers reside in system memory and can be passed directly to still image encoders (such as JPEG) or to video encoders (such as H.264). `libcamera` doesn't encode or display images itself: for that functionality, use `rpicam-apps`.

You can find the source code in the https://git.linuxtv.org/libcamera.git/[official libcamera repository]. The Raspberry Pi OS distribution uses a https://github.com/raspberrypi/libcamera.git[fork] to control updates.

Underneath the `libcamera` core, we provide a custom pipeline handler. `libcamera` uses this layer to drive the sensor and image signal processor (ISP) on the Raspberry Pi. `libcamera` contains a collection of image-processing algorithms (IPAs) including auto exposure/gain control (AEC/AGC), auto white balance (AWB), and auto lens-shading correction (ALSC).
Raspberry Pi's implementation of `libcamera` supports the following cameras: * Official cameras: ** OV5647 (V1) ** IMX219 (V2) ** IMX708 (V3) ** IMX477 (HQ) ** IMX500 (AI) ** IMX296 (GS) * Third-party sensors: ** IMX290 ** IMX327 ** IMX378 ** IMX519 ** OV9281 To extend support to a new sensor, https://git.linuxtv.org/libcamera.git/[contribute to `libcamera`]. --- # Source: rpicam_apps_multicam.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Use multiple cameras `rpicam-apps` has basic support for multiple cameras. You can attach multiple cameras to a Raspberry Pi in the following ways: * For Raspberry Pi Compute Modules, you can connect two cameras directly to a Raspberry Pi Compute Module I/O board. See the xref:../computers/compute-module.adoc#attach-a-camera-module[Compute Module documentation] for further details. With this method, you can _use both cameras simultaneously_. * For Raspberry Pi 5, you can connect two cameras directly to the board using the dual MIPI connectors. * For other Raspberry Pi devices with a camera port, you can attach two or more cameras with a Video Mux board such as https://www.arducam.com/product/multi-camera-v2-1-adapter-raspberry-pi/[this third-party product]. Since both cameras are attached to a single Unicam port, _only one camera may be used at a time_. To list all the cameras available on your platform, use the xref:camera_software.adoc#list-cameras[`list-cameras`] option. To choose which camera to use, pass the camera index to the xref:camera_software.adoc#camera[`camera`] option. NOTE: `libcamera` does not yet provide stereoscopic camera support. When running two cameras simultaneously, they must be run in separate processes, meaning there is no way to synchronise 3A operation between them. 
As a workaround, you could synchronise the cameras through an external sync signal for the HQ (IMX477) camera, or use the software camera synchronisation support described below, switching the 3A to manual mode if necessary.

==== Software Camera Synchronisation

Raspberry Pi's _libcamera_ implementation has the ability to synchronise the frames of different cameras using only software. This causes one camera to adjust its frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it works with all of Raspberry Pi's camera modules, and even third-party ones, so long as their drivers implement frame duration control correctly.

**How it works**

The scheme works by designating one camera to be the _server_. The server broadcasts timing messages onto the network at regular intervals, such as once a second. Meanwhile other cameras, known as _clients_, listen to these messages, whereupon they may lengthen or shorten frame times slightly so as to pull them into sync with the server. This process is continual, though after the first adjustment, subsequent adjustments are normally small.

The client cameras may be attached to the same Raspberry Pi device as the server, or they may be attached to different Raspberry Pis on the same network. The camera model on the clients may match the server, or it may be different. Clients and servers need to be set running at the same nominal framerate (such as 30fps).

Note that there is no back-channel from the clients to the server. It is solely the clients' responsibility to be up and running in time to match the server, and the server is completely unaware whether clients have synchronised successfully, or indeed whether there are any clients at all.
In normal operation, running the same make of camera on the same Raspberry Pi, we would expect the frame start times of the camera images to match within several tens of microseconds. When the camera models are different, the discrepancy could be significantly larger, as the cameras will probably not be able to match framerates exactly and will therefore be continually drifting apart (and brought back together with every timing message).

When cameras are on different devices, the system clocks should be synchronised using NTP (normally the case by default for Raspberry Pi OS), or if this is insufficiently precise, another protocol like PTP might be used. Any discrepancy between system clocks will feed directly into extra error in frame start times, even though the timestamps advertised on the frames will not reveal it.

**The Server**

The server, as previously explained, broadcasts timing messages onto the network, by default every second. The server runs for a fixed number of frames, by default 100, after which it informs the camera application on the device that the "synchronisation point" has been reached. At this moment, the application starts using the frames, so in the case of `rpicam-vid`, they start being encoded and recorded. Recall that the behaviour and even existence of clients has no bearing on this.

If required, there can be several servers on the same network, so long as they broadcast timing messages to different network addresses. Clients, of course, will have to be configured to listen for the correct address.

**Clients**

Clients listen out for server timing messages and, when they receive one, will shorten or lengthen a camera frame duration by the required amount so that subsequent frames will start, as far as possible, at the same moment as the server's.
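The adjustment the clients make can be sketched in a few lines of Python. This is an illustrative model of the idea only, not libcamera's actual algorithm: given the server's last advertised frame timestamp, the client's own frame timestamp, and the shared nominal frame duration, compute a one-off frame duration that pulls the client's frame phase onto the server's grid, choosing whichever of "shorten" or "lengthen" needs the smaller change:

```python
def frame_duration_correction(client_ts_us: int, server_ts_us: int,
                              nominal_duration_us: int) -> int:
    """Return a one-off frame duration (in microseconds) that aligns a
    client camera's next frames with a server's frame grid.

    Illustrative sketch only: real implementations also smooth the
    correction over several frames and respect sensor limits.
    """
    # Phase error of the client relative to the server's frame grid.
    error = (client_ts_us - server_ts_us) % nominal_duration_us
    # Correct in whichever direction needs the smaller adjustment.
    if error > nominal_duration_us // 2:
        error -= nominal_duration_us   # lengthen slightly instead
    return nominal_duration_us - error

# A client whose frames land 1 ms late on a 30 fps (33,333 us) grid
# gets one slightly shortened frame to close the gap.
corrected = frame_duration_correction(1_001_000, 1_000_000, 33_333)
```

Because the correction wraps modulo the frame duration, a client that is slightly early lengthens one frame instead of shortening many, which is why subsequent adjustments after the first one are normally small.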
The clients learn the correct "synchronisation point" from the server's messages, and just like the server, will signal the camera application at the same moment that it should start using the frames. So in the case of `rpicam-vid`, this is once again the moment at which frames will start being recorded. Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "synchronisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started! **Usage in `rpicam-vid`** We can use software camera synchronisation with `rpicam-vid` to record videos that are synchronised frame-by-frame. We're going to assume we have two cameras attached, and we're going to use camera 0 as the server, and camera 1 as the client. `rpicam-vid` defaults to a fixed 30 frames per second, which will be fine for us. First we should start the client: [source,console] ---- $ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client ---- Note the `--sync client` parameter. This will record for 20 seconds but _only_ once the synchronisation point has been reached. If necessary, it will wait indefinitely for the first server message. To start the server: [source,console] ---- $ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server ---- This too will run for 20 seconds counting from when the synchronisation point is reached and the recording starts. With the default synchronisation settings (100 frames at 30fps) this means there will be just over 3 seconds for clients to get synchronised. The server's broadcast address and port, the frequency of the timing messages and the number of frames to wait for clients to synchronise, can all be changed in the camera tuning file. 
Clients only pay attention to the broadcast address here, which should match the server's; the other information will be ignored. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide] for more information.

In practical operation there are a few final points to be aware of:

* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try to avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load.

---

# Source: rpicam_apps_packages.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

=== Install `libcamera` and `rpicam-apps`

Raspberry Pi provides two `rpicam-apps` packages:

* `rpicam-apps` contains full applications with support for previews using a desktop environment. This package is pre-installed in Raspberry Pi OS.
* `rpicam-apps-lite` omits desktop environment support, and only makes the DRM preview available. This package is pre-installed in Raspberry Pi OS Lite.

==== Dependencies

`rpicam-apps` depends on library packages named `library-name<n>`, where `<n>` is the ABI version. Your package manager should install these automatically.

==== Dev packages

You can rebuild `rpicam-apps` without building `libcamera` and `libepoxy` from scratch.
For more information, see xref:camera_software.adoc#building-rpicam-apps-without-building-libcamera[Building `rpicam-apps` without rebuilding `libcamera`].

---

# Source: rpicam_apps_post_processing.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Post-processing with `rpicam-apps`

`rpicam-apps` share a common post-processing framework. This allows them to pass the images received from the camera system through a number of custom image-processing and image-analysis routines. Each such routine is known as a _stage_.

To run post-processing stages, supply a JSON file instructing the application which stages and options to apply. You can find example JSON files that use the built-in post-processing stages in the https://github.com/raspberrypi/rpicam-apps/tree/main/assets[`assets` folder of the `rpicam-apps` repository].

For example, the **negate** stage turns light pixels dark and dark pixels light. Because the negate stage is basic, requiring no configuration, `negate.json` just names the stage:

[source,json]
----
{
    "negate": {}
}
----

To apply the negate stage to an image, pass `negate.json` to the `post-process-file` option:

[source,console]
----
$ rpicam-hello --post-process-file negate.json
----

To run multiple post-processing stages, create a JSON file that contains multiple stages as top-level keys. For example, the following configuration runs the Sobel stage, then the negate stage:

[source,json]
----
{
    "sobel_cv": {
        "ksize": 5
    },
    "negate": {}
}
----

The xref:camera_software.adoc#sobel_cv-stage[Sobel stage] uses OpenCV, hence the `cv` suffix. It has a user-configurable parameter, `ksize`, that specifies the kernel size of the filter to be used. In this case, the Sobel filter produces bright edges on a black background, and the negate stage turns this into dark edges on a white background.

.A negated Sobel filter.
image::images/sobel_negate.jpg[A negated Sobel filter]

Some stages, such as `negate`, alter the image in some way.
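Conceptually, negating an image just inverts every 8-bit sample so that light pixels become dark and vice versa. The sketch below shows the operation in Python for illustration only; the real stage is implemented in C++ inside `rpicam-apps` and operates on the camera's image buffers:

```python
def negate(pixels: bytes) -> bytes:
    """Invert every 8-bit sample: light pixels become dark and
    dark pixels become light, like the rpicam-apps `negate` stage.
    Illustrative only; operates on a flat buffer of 8-bit samples.
    """
    return bytes(255 - p for p in pixels)

# Black (0) becomes white (255), mid-grey stays near mid-grey.
assert negate(bytes([0, 128, 255])) == bytes([255, 127, 0])
```

Note that negation is its own inverse: applying the stage twice returns the original image.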
Other stages analyse the image to generate metadata. Post-processing stages can pass this metadata to other stages and even the application.

To improve performance, image analysis often uses reduced resolution. `rpicam-apps` provide a dedicated low-resolution feed directly from the ISP.

NOTE: The `rpicam-apps` supplied with Raspberry Pi OS do not include OpenCV and TensorFlow Lite. As a result, certain post-processing stages that rely on them are disabled. To use these stages, xref:camera_software.adoc#build-libcamera-and-rpicam-apps[re-compile `rpicam-apps`]. On a Raspberry Pi 3 or 4 running a 32-bit kernel, compile with the `-Dneon_flags=armv8-neon` flag to speed up certain stages.

=== Built-in stages

==== `negate` stage

This stage turns light pixels dark and dark pixels light. The `negate` stage has no user-configurable parameters.

Default `negate.json` file:

[source,json]
----
{
    "negate" : {}
}
----

Run the following command to use this stage file with `rpicam-hello`:

[source,console]
----
$ rpicam-hello --post-process-file negate.json
----

Example output:

.A negated image.
image::images/negate.jpg[A negated image]

==== `hdr` stage

This stage emphasises details in images using High Dynamic Range (HDR) and Dynamic Range Compression (DRC). DRC uses a single image, while HDR combines multiple images for a similar result.

Parameters fall into three groups: the LP filter, global tonemapping, and local contrast. This stage applies a smoothing filter to the fully-processed input images to generate a low pass (LP) image. It then generates the high pass (HP) image from the difference of the original and LP images. Then, it applies a global tonemap to the LP image and adds it back to the HP image. This process helps preserve local contrast.
You can configure this stage with the following parameters:

[cols="1,3a"]
|===
| `num_frames` | The number of frames to accumulate; for DRC, use 1; for HDR, try 8
| `lp_filter_strength` | The coefficient of the low pass IIR filter
| `lp_filter_threshold` | A piecewise linear function that relates pixel level to the threshold of meaningful detail
| `global_tonemap_points` | Points in the input image histogram mapped to targets in the output range where we wish to move them. Uses the following sub-configuration:

* an inter-quantile mean (`q` and `width`)
* a target as a proportion of the full output range (`target`)
* maximum (`max_up`) and minimum (`max_down`) gains to move the measured inter-quantile mean, to prevent the image from changing too drastically
| `global_tonemap_strength` | Strength of application of the global tonemap
| `local_pos_strength` | A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for positive (bright) detail
| `local_neg_strength` | A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for negative (dark) detail
| `local_tonemap_strength` | An overall gain applied to all local contrast that is added back
| `local_colour_scale` | A factor that allows the output colours to be affected more or less strongly
|===

To control processing strength, change the `global_tonemap_strength` and `local_tonemap_strength` parameters.

Processing a single image takes between two and three seconds for a 12 MP image on a Raspberry Pi 4. When accumulating multiple frames, this stage sends only the processed image to the application.
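Several of these parameters (`lp_filter_threshold`, `local_pos_strength`, `local_neg_strength`) are piecewise linear functions written as a flat list of alternating input/output pairs. As a sketch of how such a list can be read (an assumed interpretation using linear interpolation clamped at the ends; check the `rpicam-apps` source for the exact semantics):

```python
def piecewise_linear(points: list, x: float) -> float:
    """Evaluate a piecewise linear function given as a flat list of
    alternating (input, output) pairs, e.g.
    [0, 10.0, 2048, 205.0, 4095, 205.0].

    Linearly interpolates between adjacent pairs and clamps outside
    the first and last input values. Assumed reading of the format;
    illustrative only.
    """
    xs, ys = points[0::2], points[1::2]
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

# The default lp_filter_threshold: a 12-bit pixel at level 1024 sits
# halfway between the (0, 10.0) and (2048, 205.0) control points.
assert piecewise_linear([0, 10.0, 2048, 205.0, 4095, 205.0], 1024) == 107.5
```

Under this reading, the default `lp_filter_threshold` rises from 10 at black level to 205 at mid-range and stays flat thereafter, so brighter regions tolerate more detail before the filter treats it as meaningful.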
Default `drc.json` file for DRC:

[source,json]
----
{
    "hdr" : {
        "num_frames" : 1,
        "lp_filter_strength" : 0.2,
        "lp_filter_threshold" : [ 0, 10.0, 2048, 205.0, 4095, 205.0 ],
        "global_tonemap_points" : [
            { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 1.5, "max_down": 0.7 },
            { "q": 0.5, "width": 0.05, "target": 0.5, "max_up": 1.5, "max_down": 0.7 },
            { "q": 0.8, "width": 0.05, "target": 0.8, "max_up": 1.5, "max_down": 0.7 }
        ],
        "global_tonemap_strength" : 1.0,
        "local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
        "local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
        "local_tonemap_strength" : 1.0,
        "local_colour_scale" : 0.9
    }
}
----

Example:

.Image without DRC processing
image::images/nodrc.jpg[Image without DRC processing]

Run the following command to use this stage file with `rpicam-still`:

[source,console]
----
$ rpicam-still -o test.jpg --post-process-file drc.json
----

.Image with DRC processing
image::images/drc.jpg[Image with DRC processing]

Default `hdr.json` file for HDR:

[source,json]
----
{
    "hdr" : {
        "num_frames" : 8,
        "lp_filter_strength" : 0.2,
        "lp_filter_threshold" : [ 0, 10.0, 2048, 205.0, 4095, 205.0 ],
        "global_tonemap_points" : [
            { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 5.0, "max_down": 0.5 },
            { "q": 0.5, "width": 0.05, "target": 0.45, "max_up": 5.0, "max_down": 0.5 },
            { "q": 0.8, "width": 0.05, "target": 0.7, "max_up": 5.0, "max_down": 0.5 }
        ],
        "global_tonemap_strength" : 1.0,
        "local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
        "local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
        "local_tonemap_strength" : 1.0,
        "local_colour_scale" : 0.8
    }
}
----

Example:

.Image without HDR processing
image::images/nohdr.jpg[Image without HDR processing]

Run the following command to use this stage file with `rpicam-still`:

[source,console]
----
$ rpicam-still -o test.jpg --ev -2 --denoise cdn_off --post-process-file hdr.json
----

.Image with HDR processing
image::images/hdr.jpg[Image with HDR processing]

==== `motion_detect` stage

The
`motion_detect` stage analyses frames from the low-resolution image stream. You must configure the low-resolution stream to use this stage. The stage detects motion by comparing a region of interest (ROI) in the frame to the corresponding part of a previous frame. If enough pixels change between frames, this stage indicates the motion in metadata under the `motion_detect.result` key. This stage has no dependencies on third-party libraries.

You can configure this stage with the following parameters, passing dimensions as a proportion of the low-resolution image size between 0 and 1:

[cols="1,3"]
|===
| `roi_x` | x-offset of the region of interest for the comparison (proportion between 0 and 1)
| `roi_y` | y-offset of the region of interest for the comparison (proportion between 0 and 1)
| `roi_width` | Width of the region of interest for the comparison (proportion between 0 and 1)
| `roi_height` | Height of the region of interest for the comparison (proportion between 0 and 1)
| `difference_m` | Linear coefficient used to construct the threshold for pixels being different
| `difference_c` | Constant coefficient used to construct the threshold for pixels being different, according to `threshold = difference_m * pixel_value + difference_c`
| `frame_period` | Run the motion detector only once every this many frames
| `hskip` | Subsample the pixels by this amount horizontally
| `vskip` | Subsample the pixels by this amount vertically
| `region_threshold` | The proportion of pixels (regions) that must be categorised as different for them to count as motion
| `verbose` | Print messages to the console, including when the motion status changes
|===

Default `motion_detect.json` configuration file:

[source,json]
----
{
    "motion_detect" : {
        "roi_x" : 0.1,
        "roi_y" : 0.1,
        "roi_width" : 0.8,
        "roi_height" : 0.8,
        "difference_m" : 0.1,
        "difference_c" : 10,
        "region_threshold" : 0.005,
        "frame_period" : 5,
        "hskip" : 2,
        "vskip" : 2,
        "verbose" : 0
    }
}
----

Adjust the differences and the
threshold to make the algorithm more or less sensitive. To improve performance, use the `hskip` and `vskip` parameters.

Run the following command to use this stage file with `rpicam-hello`:

[source,console]
----
$ rpicam-hello --lores-width 128 --lores-height 96 --post-process-file motion_detect.json
----

---

# Source: rpicam_apps_post_processing_opencv.adoc

=== Post-processing with OpenCV

NOTE: These stages require an OpenCV installation. You may need to xref:camera_software.adoc#build-libcamera-and-rpicam-apps[rebuild `rpicam-apps` with OpenCV support].

==== `sobel_cv` stage

This stage applies a https://en.wikipedia.org/wiki/Sobel_operator[Sobel filter] to an image to emphasise edges.

You can configure this stage with the following parameters:

[cols="1,3"]
|===
| `ksize` | Kernel size of the Sobel filter
|===

Default `sobel_cv.json` file:

[source,json]
----
{
    "sobel_cv" : {
        "ksize": 5
    }
}
----

Example:

.Using a Sobel filter to emphasise edges.
image::images/sobel.jpg[Using a Sobel filter to emphasise edges]

==== `face_detect_cv` stage

This stage uses the OpenCV Haar classifier to detect faces in an image. It returns face location metadata under the key `face_detect.results` and optionally draws the locations on the image.

You can configure this stage with the following parameters:

[cols="1,3"]
|===
| `cascade_name` | Name of the file where the Haar cascade can be found
| `scaling_factor` | Determines the range of scales at which the image is searched for faces
| `min_neighbors` | Minimum number of overlapping neighbours required to count as a face
| `min_size` | Minimum face size
| `max_size` | Maximum face size
| `refresh_rate` | How many frames to wait before re-running the face detector
| `draw_features` | Whether to draw face locations on the returned image
|===

The `face_detect_cv` stage runs only during preview and video capture. It ignores still image capture.
It runs on the low resolution stream with a resolution between 320×240 and 640×480 pixels. Default `face_detect_cv.json` file: [source,json] ---- { "face_detect_cv" : { "cascade_name" : "/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml", "scaling_factor" : 1.1, "min_neighbors" : 2, "min_size" : 32, "max_size" : 256, "refresh_rate" : 1, "draw_features" : 1 } } ---- Example: .Drawing detected faces onto an image. image::images/face_detect.jpg[Drawing detected faces onto an image] ==== `annotate_cv` stage This stage writes text into the top corner of images using the same `%` substitutions as the xref:camera_software.adoc#info-text[`info-text`] option. Interprets xref:camera_software.adoc#info-text[`info-text` directives] first, then passes any remaining tokens to https://www.man7.org/linux/man-pages/man3/strftime.3.html[`strftime`]. For example, to achieve a datetime stamp on the video, pass `%F %T %z`: * `%F` displays the ISO-8601 date (2023-03-07) * `%T` displays 24h local time (e.g. "09:57:12") * `%z` displays the timezone relative to UTC (e.g. "-0800") This stage does not output any metadata, but it writes metadata found in `annotate.text` in place of anything in the JSON configuration file. This allows other post-processing stages to write text onto images. You can configure this stage with the following parameters: [cols="1,3"] |=== | `text` | The text string to be written | `fg` | Foreground colour | `bg` | Background colour | `scale` | A number proportional to the size of the text | `thickness` | A number that determines the thickness of the text | `alpha` | The amount of alpha to apply when overwriting background pixels |=== Default `annotate_cv.json` file: [source,json] ---- { "annotate_cv" : { "text" : "Frame %frame exp %exp ag %ag dg %dg", "fg" : 255, "bg" : 0, "scale" : 1.0, "thickness" : 2, "alpha" : 0.3 } } ---- Example: .Writing camera and date information onto an image with annotations. 
image::images/annotate.jpg[Writing camera and date information onto an image with annotations] --- # Source: rpicam_apps_post_processing_tflite.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Post-Processing with TensorFlow Lite ==== Prerequisites These stages require TensorFlow Lite (TFLite) libraries that export the {cpp} API. From Raspberry Pi OS _Trixie_ onwards, Raspberry Pi builds and distributes a TFLite package, which can be installed with the following command: [source,console] ---- $ sudo apt install libtensorflow-lite-dev ---- After installing, you must xref:camera_software.adoc#build-libcamera-and-rpicam-apps[recompile `rpicam-apps` with TensorFlow Lite support]. ==== `object_classify_tf` stage Download: https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz[] `object_classify_tf` uses a Google MobileNet v1 model to classify objects in the camera image. This stage requires a https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz[`labels.txt` file]. 
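A classification stage like this one must map the model's per-class scores onto names from the labels file and keep only the best few above the confidence threshold. The following Python sketch illustrates that top-N selection with stand-in labels and scores; it is not the stage's actual C++ code.

```python
# Hypothetical sketch of mapping per-class scores to labels, as a
# classification stage must do (NOT the stage's actual C++ code).

def top_n_results(scores, labels, n=2, threshold_high=0.6):
    """Return the n best (label, score) pairs at or above the confidence threshold."""
    ranked = sorted(zip(scores, labels), reverse=True)
    return [(label, score) for score, label in ranked[:n]
            if score >= threshold_high]

labels = ["desktop computer", "monitor", "keyboard"]   # stand-in label file contents
scores = [0.82, 0.71, 0.10]                            # stand-in model output
results = top_n_results(scores, labels)
```

In the real stage, the selected labels are written into `annotate.text` metadata so that the `annotate_cv` stage can render them onto the image.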
You can configure this stage with the following parameters: [cols="1,3"] |=== | `top_n_results` | The number of results to show | `refresh_rate` | The number of frames that must elapse between model runs | `threshold_high` | Confidence threshold (between 0 and 1) where objects are considered as being present | `threshold_low` | Confidence threshold which objects must drop below before being discarded as matches | `model_file` | Filepath of the TFLite model file | `labels_file` | Filepath of the file containing the object labels | `display_labels` | Whether to display the object labels on the image; inserts `annotate.text` metadata for the `annotate_cv` stage to render | `verbose` | Output more information to the console |=== Example `object_classify_tf.json` file: [source,json] ---- { "object_classify_tf" : { "top_n_results" : 2, "refresh_rate" : 30, "threshold_high" : 0.6, "threshold_low" : 0.4, "model_file" : "/home//models/mobilenet_v1_1.0_224_quant.tflite", "labels_file" : "/home//models/labels.txt", "display_labels" : 1 }, "annotate_cv" : { "text" : "", "fg" : 255, "bg" : 0, "scale" : 1.0, "thickness" : 2, "alpha" : 0.3 } } ---- The stage operates on a low resolution stream image of size 224×224. Run the following command to use this stage file with `rpicam-hello`: [source,console] ---- $ rpicam-hello --post-process-file object_classify_tf.json --lores-width 224 --lores-height 224 ---- .Object classification of a desktop computer and monitor. image::images/classify.jpg[Object classification of a desktop computer and monitor] ==== `pose_estimation_tf` stage Download: https://github.com/Qengineering/TensorFlow_Lite_Pose_RPi_32-bits[] `pose_estimation_tf` uses a Google MobileNet v1 model to detect pose information. 
You can configure this stage with the following parameters: [cols="1,3"] |=== | `refresh_rate` | The number of frames that must elapse between model runs | `model_file` | Filepath of the TFLite model file | `verbose` | Output extra information to the console |=== Use the separate `plot_pose_cv` stage to draw the detected pose onto the main image. You can configure the `plot_pose_cv` stage with the following parameters: [cols="1,3"] |=== | `confidence_threshold` | Confidence threshold determining how much to draw; can be less than zero |=== Example `pose_estimation_tf.json` file: [source,json] ---- { "pose_estimation_tf" : { "refresh_rate" : 5, "model_file" : "posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite" }, "plot_pose_cv" : { "confidence_threshold" : -0.5 } } ---- The stage operates on a low resolution stream image of size 257×257. **Because YUV420 images must have even dimensions, round up to 258×258 for YUV420 images.** Run the following command to use this stage file with `rpicam-hello`: [source,console] ---- $ rpicam-hello --post-process-file pose_estimation_tf.json --lores-width 258 --lores-height 258 ---- .Pose estimation of an adult human male. image::images/pose.jpg[Pose estimation of an adult human male] ==== `object_detect_tf` stage Download: https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip[] `object_detect_tf` uses a Google MobileNet v1 SSD (Single Shot Detector) model to detect and label objects. You can configure this stage with the following parameters: [cols="1,3"] |=== | `refresh_rate` | The number of frames that must elapse between model runs | `model_file` | Filepath of the TFLite model file | `labels_file` | Filepath of the file containing the list of labels | `confidence_threshold` | Confidence threshold before accepting a match | `overlap_threshold` | Determines the amount of overlap between matches for them to be merged as a single match. 
| `verbose` | Output extra information to the console
|===

Use the separate `object_detect_draw_cv` stage to draw the detected objects onto the main image. You can configure the `object_detect_draw_cv` stage with the following parameters:

[cols="1,3"]
|===
| `line_thickness` | Thickness of the bounding box lines
| `font_size` | Size of the font used for the label
|===

Example `object_detect_tf.json` file:

[source,json]
----
{
    "object_detect_tf" : {
        "number_of_threads" : 2,
        "refresh_rate" : 10,
        "confidence_threshold" : 0.5,
        "overlap_threshold" : 0.5,
        "model_file" : "/home//models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/detect.tflite",
        "labels_file" : "/home//models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/labelmap.txt",
        "verbose" : 1
    },
    "object_detect_draw_cv" : {
        "line_thickness" : 2
    }
}
----

The stage operates on a low resolution stream image of size 300×300. Run the following command, which passes a 300×300 crop to the detector from the centre of the 400×300 low resolution image, to use this stage file with `rpicam-hello`:

[source,console]
----
$ rpicam-hello --post-process-file object_detect_tf.json --lores-width 400 --lores-height 300
----

.Detecting apple and cat objects.
image::images/detection.jpg[Detecting apple and cat objects]

==== `segmentation_tf` stage

Download: https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2?lite-format=tflite[]

`segmentation_tf` uses a Google MobileNet v1 model. This stage requires a label file, found at `assets/segmentation_labels.txt`.

This stage runs on an image of size 257×257. Because YUV420 images must have even dimensions, the low resolution image should be at least 258 pixels in both width and height. The stage adds a vector of 257×257 values to the image metadata, where each value indicates the category a pixel belongs to. You can optionally draw a representation of the segmentation into the bottom right corner of the image.
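The per-pixel label vector described above can be summarised by counting pixels per category. The sketch below illustrates one plausible interpretation, with stand-in labels and a synthetic 257×257 label map; it is not the stage's actual implementation.

```python
# Hypothetical sketch of summarising the segmentation stage's output
# metadata: a flat vector of per-pixel category indices (NOT the actual
# implementation). Categories are reported only when their pixel count
# exceeds a configured threshold.
from collections import Counter

def significant_labels(label_map, labels, threshold):
    counts = Counter(label_map)
    return {labels[idx]: n for idx, n in counts.items() if n > threshold}

labels = ["background", "person", "cat"]              # stand-in label file
label_map = [0] * 60000 + [1] * 5000 + [2] * 1049     # 257 * 257 = 66049 pixels
found = significant_labels(label_map, labels, threshold=2000)
```

Counting per category like this is also roughly what the stage's `verbose` reporting does when a label's pixel count exceeds the configured `threshold`.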
You can configure this stage with the following parameters: [cols="1,3"] |=== | `refresh_rate` | The number of frames that must elapse between model runs | `model_file` | Filepath of the TFLite model file | `labels_file` | Filepath of the file containing the list of labels | `threshold` | When verbose is set, prints when the number of pixels with any label exceeds this number | `draw` | Draws the segmentation map into the bottom right hand corner of the image | `verbose` | Output extra information to the console |=== Example `segmentation_tf.json` file: [source,json] ---- { "segmentation_tf" : { "number_of_threads" : 2, "refresh_rate" : 10, "model_file" : "/home//models/lite-model_deeplabv3_1_metadata_2.tflite", "labels_file" : "/home//models/segmentation_labels.txt", "draw" : 1, "verbose" : 1 } } ---- This example takes a camera image and reduces it to 258×258 pixels in size. This stage even works when squashing a non-square image without cropping. This example enables the segmentation map in the bottom right hand corner. Run the following command to use this stage file with `rpicam-hello`: [source,console] ---- $ rpicam-hello --post-process-file segmentation_tf.json --lores-width 258 --lores-height 258 --viewfinder-width 1024 --viewfinder-height 1024 ---- .Running segmentation and displaying the results on a map in the bottom right. image::images/segmentation.jpg[Running segmentation and displaying the results on a map in the bottom right] --- # Source: rpicam_apps_post_processing_writing.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Write your own post-processing stages With the `rpicam-apps` post-processing framework, users can create their own custom post-processing stages. You can even include algorithms and routines from OpenCV and TensorFlow Lite. ==== Basic post-processing stages To create your own post-processing stage, derive a new class from the `PostProcessingStage` class. 
All post-processing stages must implement the following member functions:

`char const *Name() const`:: Returns the name of the stage. Matched against stages listed in the JSON post-processing configuration file.

`void Read(boost::property_tree::ptree const &params)`:: Reads the stage's configuration parameters from a provided JSON file.

`void AdjustConfig(std::string const &use_case, StreamConfiguration *config)`:: Gives stages a chance to influence the configuration of the camera. Frequently empty for stages with no need to configure the camera.

`void Configure()`:: Called just after the camera has been configured to allocate resources and check that the stage has access to necessary streams.

`void Start()`:: Called when the camera starts. Frequently empty for stages with no need to configure the camera.

`bool Process(CompletedRequest &completed_request)`:: Presents completed camera requests for post-processing. This is where you'll implement pixel manipulations and image analysis. Returns `true` if the post-processing framework should **not** deliver this request to the application.

`void Stop()`:: Called when the camera stops. Used to shut down any active processing on asynchronous threads.

`void Teardown()`:: Called when the camera configuration is destroyed. Use this as a destructor where you can de-allocate resources set up in the `Configure` method.

In any stage implementation, call `RegisterStage` to register your stage with the system.

Don't forget to add your stage to `meson.build` in the post-processing folder.

When writing your own stages, keep these tips in mind:

* The `Process` method blocks the imaging pipeline. If it takes too long, the pipeline will stutter. **Always delegate time-consuming algorithms to an asynchronous thread.**
* When delegating work to another thread, you must copy the image buffers. For applications like image analysis that don't require full resolution, try using a low-resolution image stream.
* The post-processing framework _uses parallelism to process every frame_. This improves throughput. However, some OpenCV and TensorFlow Lite functions introduce another layer of parallelism _within_ each frame. Consider serialising calls within each frame, since post-processing already takes advantage of multiple threads.
* Most streams, including the low resolution stream, use the YUV420 format. You may need to convert this to another format for certain OpenCV or TFLite functions.
* For the best performance, always alter images in-place.

For a basic example, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/negate_stage.cpp[`negate_stage.cpp`]. This stage negates an image by turning light pixels dark and dark pixels light. This stage is mostly derived-class boilerplate, achieving the negation logic in barely half a dozen lines of code.

For another example, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/sobel_cv_stage.cpp[`sobel_cv_stage.cpp`], which implements a Sobel filter in just a few lines of OpenCV functions.

==== TensorFlow Lite stages

For stages that use TensorFlow Lite (TFLite), derive a new class from the `TfStage` class. This class delegates model execution to a separate thread to prevent camera stuttering.

The `TfStage` class implements all the `PostProcessingStage` member functions post-processing stages must normally implement, _except for_ `Name`. All `TfStage`-derived stages must implement the `Name` function, and should implement some or all of the following virtual member functions:

`void readExtras()`:: The base class reads the named model and certain other parameters like the `refresh_rate`. Use this function to read extra parameters for the derived stage, and to check that the loaded model is correct (e.g. has the right input and output dimensions).
`void checkConfiguration()`:: The base class fetches the low resolution stream that TFLite operates on, and the full resolution stream in case the derived stage needs it. Use this function to check for the streams required by your stage. If your stage can't access one of the required streams, you might skip processing or throw an error.

`void interpretOutputs()`:: Use this function to read and interpret the model output. _Runs in the same thread as the model when the model completes_.

`void applyResults()`:: Use this function to apply the results of the model (which could be several frames old) to the current frame. Typically involves attaching metadata or drawing. _Runs in the main thread, before frames are delivered_.

For example implementations, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/object_classify_tf_stage.cpp[`object_classify_tf_stage.cpp`] and https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/pose_estimation_tf_stage.cpp[`pose_estimation_tf_stage.cpp`].

---

# Source: rpicam_apps_writing.adoc

=== Write your own `rpicam` apps

`rpicam-apps` does not provide all of the camera-related features that anyone could ever need. Instead, these applications are small and flexible. Users who require different behaviour can implement it themselves.

All of the `rpicam-apps` use an event loop that receives messages when a new set of frames arrives from the camera system. This set of frames is called a `CompletedRequest`. The `CompletedRequest` contains:

* all images derived from that single camera frame: often a low-resolution image and a full-size output
* metadata from the camera and post-processing systems

==== `rpicam-hello`

`rpicam-hello` is the smallest application, and the best place to start understanding `rpicam-apps` design.
It extracts the `CompletedRequestPtr`, a shared pointer to the `CompletedRequest`, from the message, and forwards it to the preview window:

[cpp]
----
CompletedRequestPtr &completed_request = std::get<CompletedRequestPtr>(msg.payload);
app.ShowPreview(completed_request, app.ViewfinderStream());
----

Every `CompletedRequest` must be recycled back to the camera system so that the buffers can be reused. Otherwise, the camera runs out of buffers for new camera frames. This recycling process happens automatically, using {cpp}'s _shared pointer_ and _custom deleter_ mechanisms, when no references to the `CompletedRequest` remain.

As a result, `rpicam-hello` must complete the following actions to recycle the buffer space:

* The event loop must finish a cycle so the message (`msg` in the code), which holds a reference to the `CompletedRequest`, can be replaced with the next message. This discards the reference to the previous message.
* When the event thread calls `ShowPreview`, it passes the preview thread a reference to the `CompletedRequest`. The preview thread discards the last `CompletedRequest` instance each time `ShowPreview` is called.

==== `rpicam-vid`

`rpicam-vid` is similar to `rpicam-hello` with encoding added to the event loop. Before the event loop starts, `rpicam-vid` configures the encoder with a callback. The callback handles the buffer containing the encoded image data. In the code below, we send the buffer to the `Output` object. `Output` could write it to a file or stream it, depending on the options specified.

[cpp]
----
app.SetEncodeOutputReadyCallback(std::bind(&Output::OutputReady, output.get(), _1, _2, _3, _4));
----

Because this code passes the encoder a reference to the `CompletedRequest`, `rpicam-vid` can't recycle buffer data until the event loop, preview window, _and_ encoder all discard their references.

==== `rpicam-raw`

`rpicam-raw` is similar to `rpicam-vid`. It also encodes during the event loop. However, `rpicam-raw` uses a dummy encoder called the `NullEncoder`.
This uses the input image as the output buffer instead of encoding it with a codec. `NullEncoder` only discards its reference to the buffer once the output callback completes. This guarantees that the buffer isn't recycled before the callback processes the image. `rpicam-raw` doesn't forward anything to the preview window. `NullEncoder` is possibly overkill in `rpicam-raw`. We could probably send images straight to the `Output` object, instead. However, `rpicam-apps` need to limit work in the event loop. `NullEncoder` demonstrates how you can handle most processes (even holding onto a reference) in other threads. ==== `rpicam-jpeg` `rpicam-jpeg` starts the camera in preview mode in the usual way. When the timer completes, it stops the preview and switches to still capture: [cpp] ---- app.StopCamera(); app.Teardown(); app.ConfigureStill(); app.StartCamera(); ---- The event loop grabs the first frame returned from still mode and saves this as a JPEG. --- # Source: rpicam_configuration.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Configuration Most use cases work automatically with no need to alter the camera configuration. However, some common use cases do require configuration tweaks, including: * Third-party cameras (the manufacturer's instructions should explain necessary configuration changes, if any) * Using a non-standard driver or overlay with an official Raspberry Pi camera Raspberry Pi OS recognises the following overlays in `/boot/firmware/config.txt`. 
|===
| Camera Module | In `/boot/firmware/config.txt`

| V1 camera (OV5647) | `dtoverlay=ov5647`
| V2 camera (IMX219) | `dtoverlay=imx219`
| HQ camera (IMX477) | `dtoverlay=imx477`
| GS camera (IMX296) | `dtoverlay=imx296`
| Camera Module 3 (IMX708) | `dtoverlay=imx708`
| IMX290 and IMX327 | `dtoverlay=imx290,clock-frequency=74250000` or `dtoverlay=imx290,clock-frequency=37125000` (both modules share the imx290 kernel driver; refer to instructions from the module vendor for the correct frequency)
| IMX378 | `dtoverlay=imx378`
| OV9281 | `dtoverlay=ov9281`
|===

To use one of these overlays, you must disable automatic camera detection. To disable automatic detection, set `camera_auto_detect=0` in `/boot/firmware/config.txt`. If `config.txt` already contains a line assigning a `camera_auto_detect` value, change the value to `0`. Reboot your Raspberry Pi with `sudo reboot` to load your changes.

If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the Compute Modules, for example), then you can specify the use of camera connector 0 by adding `,cam0` to the `dtoverlay` that you used from the table above. If you do not add this, it will default to checking camera connector 1. Note that for official Raspberry Pi camera modules connected to SBCs (not Compute Modules), auto-detection will correctly identify all the cameras connected to your device.

[[tuning-files]]
==== Tweak camera behaviour with tuning files

Raspberry Pi's `libcamera` implementation includes a **tuning file** for each camera. This file controls algorithms and hardware to produce the best image quality. `libcamera` can only determine the sensor in use, not the module. As a result, some modules require a tuning file override. Use the xref:camera_software.adoc#tuning-file[`tuning-file`] option to specify an override.

You can also copy and alter existing tuning files to customise camera behaviour.
For example, the no-IR-filter (NoIR) versions of sensors use Auto White Balance (AWB) settings different from the standard versions. On a Raspberry Pi 5 or later, you can specify the NoIR tuning file for the IMX219 sensor with the following command:

[source,console]
----
$ rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/pisp/imx219_noir.json
----

NOTE: Raspberry Pi models prior to Raspberry Pi 5 use different tuning files. On those devices, use the files stored in `/usr/share/libcamera/ipa/rpi/vc4/` instead.

`libcamera` maintains tuning files for a number of cameras, including third-party models. For instance, you can find the tuning file for the Soho Enterprises SE327M12 in `se327m12.json`.

---

# Source: rpicam_detect.adoc

=== `rpicam-detect`

NOTE: Raspberry Pi OS does not include `rpicam-detect`. However, you can build `rpicam-detect` if you have xref:camera_software.adoc#post-processing-with-tensorflow-lite[installed TensorFlow Lite]. For more information, see the xref:camera_software.adoc#build-libcamera-and-rpicam-apps[`rpicam-apps` build instructions]. Don't forget to pass `-Denable_tflite=enabled` when you run `meson`.

`rpicam-detect` displays a preview window and monitors the contents using a Google MobileNet v1 SSD (Single Shot Detector) neural network, trained to identify about 80 classes of objects using the Coco dataset. `rpicam-detect` recognises people, cars, cats and many other objects.

Whenever `rpicam-detect` detects a target object, it captures a full-resolution JPEG. Then it returns to monitoring preview mode.

See the xref:camera_software.adoc#object_detect_tf-stage[TensorFlow Lite object detector] section for general information on model usage.
For example, you might spy secretly on your cats while you are away with: [source,console] ---- $ rpicam-detect -t 0 -o cat%04d.jpg --lores-width 400 --lores-height 300 --post-process-file object_detect_tf.json --object cat ---- --- # Source: rpicam_hello.adoc *Note: This file could not be automatically converted from AsciiDoc.* === `rpicam-hello` `rpicam-hello` briefly displays a preview window containing the video feed from a connected camera. To use `rpicam-hello` to display a preview window for five seconds, run the following command in a terminal: [source,console] ---- $ rpicam-hello ---- You can pass an optional duration (in milliseconds) with the xref:camera_software.adoc#timeout[`timeout`] option. A value of `0` runs the preview indefinitely: [source,console] ---- $ rpicam-hello --timeout 0 ---- Use `Ctrl+C` in the terminal or the close button on the preview window to stop `rpicam-hello`. ==== Display an image sensor preview Most of the `rpicam-apps` display a preview image in a window. If there is no active desktop environment, the preview draws directly to the display using the Linux Direct Rendering Manager (DRM). Otherwise, `rpicam-apps` attempt to use the desktop environment. Both paths use zero-copy GPU buffer sharing: as a result, X forwarding is _not_ supported. If you run the X window server and want to use X forwarding, pass the xref:camera_software.adoc#qt-preview[`qt-preview`] flag to render the preview window in a https://en.wikipedia.org/wiki/Qt_(software)[Qt] window. The Qt preview window uses more resources than the alternatives. NOTE: Older systems using Gtk2 may, when linked with OpenCV, produce `Glib-GObject` errors and fail to show the Qt preview window. In this case edit the file `/etc/xdg/qt5ct/qt5ct.conf` as root and replace the line containing `style=gtk2` with `style=gtk3`. 
To suppress the preview window entirely, pass the xref:camera_software.adoc#nopreview[`nopreview`] flag: [source,console] ---- $ rpicam-hello -n ---- The xref:camera_software.adoc#info-text[`info-text`] option displays image information on the window title bar using `%` directives. For example, the following command displays the current red and blue gain values: [source,console] ---- $ rpicam-hello --info-text "red gain %rg, blue gain %bg" ---- For a full list of directives, see the xref:camera_software.adoc#info-text[`info-text` reference]. --- # Source: rpicam_jpeg.adoc *Note: This file could not be automatically converted from AsciiDoc.* === `rpicam-jpeg` `rpicam-jpeg` helps you capture images on Raspberry Pi devices. To capture a full resolution JPEG image and save it to a file named `test.jpg`, run the following command: [source,console] ---- $ rpicam-jpeg --output test.jpg ---- You should see a preview window for five seconds. Then, `rpicam-jpeg` captures a full resolution JPEG image and saves it. Use the xref:camera_software.adoc#timeout[`timeout`] option to alter display time of the preview window. The xref:camera_software.adoc#width-and-height[`width` and `height`] options change the resolution of the saved image. For example, the following command displays the preview window for 2 seconds, then captures and saves an image with a resolution of 640×480 pixels: [source,console] ---- $ rpicam-jpeg --output test.jpg --timeout 2000 --width 640 --height 480 ---- --- # Source: rpicam_options_common.adoc *Note: This file could not be automatically converted from AsciiDoc.* == `rpicam-apps` options reference === Common options The following options apply across all the `rpicam-apps` with similar or identical semantics, unless otherwise noted. To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. 
If the value contains a space, surround the value in quotes. Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.

==== `help`

Alias: `-h`

Prints the full set of options, along with a brief synopsis of each option. Does not accept a value.

==== `version`

Prints out version strings for `libcamera` and `rpicam-apps`. Does not accept a value.

Example output:

----
rpicam-apps build: ca559f46a97a 27-09-2021 (14:10:24)
libcamera build: v0.0.0+3058-c29143f7
----

==== `list-cameras`

Lists the detected cameras attached to your Raspberry Pi and their available sensor modes. Does not accept a value.

Sensor mode identifiers have the following form: `S<Bayer order><Bit-depth>_<Optional packing> : <Resolution list>`

Crop is specified in native sensor pixels (even in pixel binning mode) as `(<x>, <y>)/<width>×<height>`. `(x, y)` specifies the location of the crop window of size `width × height` in the sensor array.

For example, the following output displays information about an `IMX219` sensor at index 0 and an `IMX477` sensor at index 1:

----
Available cameras
-----------------
0 : imx219 [3280x2464] (/base/soc/i2c0mux/i2c@1/imx219@10)
    Modes: 'SRGGB10_CSI2P' : 640x480 [206.65 fps - (1000, 752)/1280x960 crop]
                             1640x1232 [41.85 fps - (0, 0)/3280x2464 crop]
                             1920x1080 [47.57 fps - (680, 692)/1920x1080 crop]
                             3280x2464 [21.19 fps - (0, 0)/3280x2464 crop]
           'SRGGB8' : 640x480 [206.65 fps - (1000, 752)/1280x960 crop]
                      1640x1232 [41.85 fps - (0, 0)/3280x2464 crop]
                      1920x1080 [47.57 fps - (680, 692)/1920x1080 crop]
                      3280x2464 [21.19 fps - (0, 0)/3280x2464 crop]
1 : imx477 [4056x3040] (/base/soc/i2c0mux/i2c@1/imx477@1a)
    Modes: 'SRGGB10_CSI2P' : 1332x990 [120.05 fps - (696, 528)/2664x1980 crop]
           'SRGGB12_CSI2P' : 2028x1080 [50.03 fps - (0, 440)/4056x2160 crop]
                             2028x1520 [40.01 fps - (0, 0)/4056x3040 crop]
                             4056x3040 [10.00 fps - (0, 0)/4056x3040 crop]
----

For the IMX219 sensor in the above example:

* all modes have an `RGGB` Bayer ordering
* all modes provide either 8-bit or 10-bit CSI2 packed readout at the listed resolutions

==== `camera`

Selects the camera to use. Specify an index from the xref:camera_software.adoc#list-cameras[list of available cameras].

==== `config`

Alias: `-c`

Specify a file containing CLI options and values. Consider a file named `example_configuration.txt` that contains the following text, specifying options and values as key-value pairs, one option per line, long (non-alias) option names only:

----
timeout=99000
verbose=
----

TIP: Omit the leading `--` that you normally pass on the command line. For flags that lack a value, such as `verbose` in the above example, you must include a trailing `=`.

You could then run the following command to specify a timeout of 99000 milliseconds and verbose output:

[source,console]
----
$ rpicam-hello --config example_configuration.txt
----

==== `timeout`

Alias: `-t`

Default value: 5000 milliseconds (5 seconds)

Specify how long the application runs before closing. This value is interpreted as a number of milliseconds unless an optional suffix is used to change the unit. The suffix may be one of:

* `min` - minutes
* `s` or `sec` - seconds
* `ms` - milliseconds (the default if no suffix used)
* `us` - microseconds
* `ns` - nanoseconds

This time applies to both video recording and preview windows. When capturing a still image, the application shows a preview window for the length of time specified by the `timeout` parameter before capturing the output image.

To run the application indefinitely, specify a value of `0`. Floating point values are also permitted. Example: `rpicam-hello -t 0.5min` would run for 30 seconds.

==== `preview`

Alias: `-p`

Sets the location (x,y coordinates) and size (w,h dimensions) of the desktop or DRM preview window. Does not affect the resolution or aspect ratio of images requested from the camera. Scales image size and pillar or letterboxes image aspect ratio to fit within the preview window.
Pass the preview window dimensions in the following comma-separated form: `x,y,w,h` Example: `rpicam-hello --preview 100,100,500,500` image::images/preview_window.jpg[Letterboxed preview image] ==== `fullscreen` Alias: `-f` Forces the preview window to use the entire screen with no border or title bar. Scales image size and pillar or letterboxes image aspect ratio to fit within the entire screen. Does not accept a value. ==== `qt-preview` Uses the Qt preview window, which consumes more resources than the alternatives, but supports X window forwarding. Incompatible with the xref:camera_software.adoc#fullscreen[`fullscreen`] flag. Does not accept a value. ==== `nopreview` Alias: `-n` Causes the application to _not_ display a preview window at all. Does not accept a value. ==== `info-text` Default value: `"#%frame (%fps fps) exp %exp ag %ag dg %dg"` Sets the supplied string as the title of the preview window when running in a desktop environment. Supports the following image metadata substitutions: |=== | Directive | Substitution | `%frame` | Sequence number of the frame. | `%fps` | Instantaneous frame rate. | `%exp` | Shutter speed used to capture the image, in microseconds. | `%ag` | Analogue gain applied to the image in the sensor. | `%dg` | Digital gain applied to the image by the ISP. | `%rg` | Gain applied to the red component of each pixel. | `%bg` | Gain applied to the blue component of each pixel. | `%focus` | Focus metric for the image, where a larger value implies a sharper image. | `%lp` | Current lens position in dioptres (1 / distance in metres). | `%afstate` | Autofocus algorithm state (`idle`, `scanning`, `focused` or `failed`). |=== image::images/focus.jpg[Image showing focus measure] ==== `width` and `height` Each accepts a single number defining the dimensions, in pixels, of the captured image. For `rpicam-still`, `rpicam-jpeg` and `rpicam-vid`, specifies output resolution. For `rpicam-raw`, specifies raw frame resolution. 
For cameras with a 2×2 binned readout mode, specifying a resolution equal to or smaller than the binned mode captures 2×2 binned raw frames. For `rpicam-hello`, has no effect.

Examples:

* `rpicam-vid -o test.h264 --width 1920 --height 1080` captures 1080p video.
* `rpicam-still -r -o test.jpg --width 2028 --height 1520` captures a 2028×1520 resolution JPEG. If used with the HQ camera, uses 2×2 binned mode, so the raw file (`test.dng`) contains a 2028×1520 raw Bayer image.

==== `viewfinder-width` and `viewfinder-height`

Each accepts a single number defining the dimensions, in pixels, of the image displayed in the preview window. Does not affect the preview window dimensions, since images are resized to fit. Does not affect captured still images or videos.

==== `mode`

Allows you to specify a camera mode in the following colon-separated format: `<width>:<height>:<bit-depth>:<packing>`. The system selects the closest available option for the sensor if there is not an exact match for a provided value. You can use the packed (`P`) or unpacked (`U`) packing formats. Impacts the format of stored videos and stills, but not the format of frames passed to the preview window.

Bit-depth and packing are optional. Bit-depth defaults to 12. Packing defaults to `P` (packed).

For information about the bit-depth, resolution, and packing options available for your sensor, see xref:camera_software.adoc#list-cameras[`list-cameras`].

Examples:

* `4056:3040:12:P` - 4056×3040 resolution, 12 bits per pixel, packed.
* `1632:1224:10` - 1632×1224 resolution, 10 bits per pixel.
* `2592:1944:10:U` - 2592×1944 resolution, 10 bits per pixel, unpacked.
* `3264:2448` - 3264×2448 resolution.

===== Packed format details

The packed format uses less storage for pixel data.

_On Raspberry Pi 4 and earlier devices_, the packed format packs pixels using the MIPI CSI-2 standard. This means:

* 10-bit camera modes pack 4 pixels into 5 bytes. The first 4 bytes contain the 8 most significant bits (MSBs) of each pixel, and the final byte contains the 4 pairs of least significant bits (LSBs).
* 12-bit camera modes pack 2 pixels into 3 bytes. The first 2 bytes contain the 8 most significant bits (MSBs) of each pixel, and the final byte contains the 4 least significant bits (LSBs) of both pixels.

_On Raspberry Pi 5 and later devices_, the packed format compresses pixel values with a visually lossless compression scheme into 8 bits (1 byte) per pixel.

===== Unpacked format details

The unpacked format provides pixel values that are much easier to manually manipulate, at the expense of using more storage for pixel data.

On all devices, the unpacked format uses 2 bytes per pixel.

_On Raspberry Pi 4 and earlier devices_, applications apply zero padding at the *most significant end*. In the unpacked format, a pixel from a 10-bit camera mode cannot exceed the value 1023.

_On Raspberry Pi 5 and later devices_, applications apply zero padding at the *least significant end*, so images use the full 16-bit dynamic range of the pixel depth delivered by the sensor.

==== `viewfinder-mode`

Identical to the `mode` option, but it applies to the data passed to the preview window. For more information, see the xref:camera_software.adoc#mode[`mode` documentation].

==== `lores-width` and `lores-height`

Delivers a second, lower-resolution image stream from the camera, scaled down to the specified dimensions. Each accepts a single number defining the dimensions, in pixels, of the lower-resolution stream.

Available for preview and video modes. Not available for still captures. If you specify an aspect ratio different from the normal resolution stream, generates non-square pixels. For `rpicam-vid`, disables extra colour-denoise processing.

Useful for image analysis when combined with xref:camera_software.adoc#post-processing-with-rpicam-apps[image post-processing].

==== `hflip`

Flips the image horizontally. Does not accept a value.
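The MIPI CSI-2 byte layouts described under _Packed format details_ above can be made concrete with a short Python sketch. This is purely illustrative (it is not part of `rpicam-apps` or Picamera2), and it assumes the standard CSI-2 ordering in which the trailing byte carries the pixels' LSBs lowest-pixel-first:

```python
def unpack_raw10(group: bytes) -> list[int]:
    """Unpack one 5-byte MIPI CSI-2 RAW10 group into four 10-bit pixels.

    The first four bytes hold the 8 MSBs of each pixel; the fifth byte
    holds the four 2-bit LSB pairs (pixel 0 in bits 1:0, pixel 1 in
    bits 3:2, and so on).
    """
    if len(group) != 5:
        raise ValueError("RAW10 groups are 5 bytes")
    lsbs = group[4]
    return [(group[i] << 2) | ((lsbs >> (2 * i)) & 0b11) for i in range(4)]


def unpack_raw12(group: bytes) -> list[int]:
    """Unpack one 3-byte MIPI CSI-2 RAW12 group into two 12-bit pixels.

    The first two bytes hold the 8 MSBs of each pixel; the third byte
    holds the two 4-bit LSB nibbles (pixel 0 in the low nibble).
    """
    if len(group) != 3:
        raise ValueError("RAW12 groups are 3 bytes")
    lsbs = group[2]
    return [(group[0] << 4) | (lsbs & 0x0F), (group[1] << 4) | (lsbs >> 4)]
```

A full-scale 10-bit pixel therefore never exceeds 1023, matching the unpacked-format note above: for instance, `unpack_raw10(bytes([0xFF, 0x00, 0x80, 0x01, 0x1B]))` returns `[1023, 2, 513, 4]`.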
==== `vflip`

Flips the image vertically. Does not accept a value.

==== `rotation`

Rotates the image extracted from the sensor. Accepts only the values 0 or 180.

==== `roi`

Crops the image extracted from the full field of the sensor. Accepts four decimal values, _ranged 0 to 1_, in the following format: `<x>,<y>,<w>,<h>`. Each of these values represents a proportion of the available width and height as a decimal between 0 and 1.

These values define the following proportions:

* `<x>`: X coordinates to skip before extracting an image
* `<y>`: Y coordinates to skip before extracting an image
* `<w>`: image width to extract
* `<h>`: image height to extract

Defaults to `0,0,1,1` (starts at the first X coordinate and the first Y coordinate, uses 100% of the image width, uses 100% of the image height).

Examples:

* `rpicam-hello --roi 0.25,0.25,0.5,0.5` selects exactly a quarter of the total number of pixels, cropped from the centre of the image (skips the first 25% of X coordinates, skips the first 25% of Y coordinates, uses 50% of the total image width, uses 50% of the total image height).
* `rpicam-hello --roi 0,0,0.25,0.25` selects exactly one sixteenth of the total number of pixels, cropped from the top left of the image (skips the first 0% of X coordinates, skips the first 0% of Y coordinates, uses 25% of the image width, uses 25% of the image height).

==== `hdr`

Default value: `off`

Runs the camera in HDR mode. If passed without a value, assumes `auto`. Accepts one of the following values:

* `off` - Disables HDR.
* `auto` - Enables HDR on supported devices. Uses the sensor's built-in HDR mode if available. If the sensor lacks a built-in HDR mode, uses on-board HDR mode, if available.
* `single-exp` - Uses on-board HDR mode, if available, even if the sensor has a built-in HDR mode. If on-board HDR mode is not available, disables HDR.

Raspberry Pi 5 and later devices have an on-board HDR mode.
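The fraction-to-pixel arithmetic behind the `roi` option above can be sketched in a few lines of Python. The helper below is illustrative only (it is not an `rpicam-apps` API); it converts an `--roi`-style string into a pixel crop for a sensor of a given size:

```python
def roi_to_pixels(roi: str, sensor_width: int, sensor_height: int) -> tuple[int, int, int, int]:
    """Convert an `--roi x,y,w,h` string of 0-1 fractions into a pixel crop.

    Returns (x, y, w, h) in sensor pixels for a sensor of the given size.
    """
    x, y, w, h = (float(v) for v in roi.split(","))
    for v in (x, y, w, h):
        if not 0.0 <= v <= 1.0:
            raise ValueError("roi values must lie between 0 and 1")
    return (round(x * sensor_width), round(y * sensor_height),
            round(w * sensor_width), round(h * sensor_height))

# The default `0,0,1,1` keeps the full field; `0.25,0.25,0.5,0.5` crops the
# central quarter of the pixel area. For a 4056x3040 (HQ camera) sensor:
print(roi_to_pixels("0.25,0.25,0.5,0.5", 4056, 3040))  # (1014, 760, 2028, 1520)
```

Because `roi` is expressed in fractions of the sensor's width and height, the same string selects the equivalent region on any sensor, regardless of its resolution.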
To check for built-in HDR modes in a sensor, pass this option in addition to xref:camera_software.adoc#list-cameras[`list-cameras`].

=== Camera control options

The following options control image processing and algorithms that affect camera image quality.

==== `sharpness`

Sets image sharpness. Accepts a numeric value along the following spectrum:

* `0.0` applies no sharpening
* values greater than `0.0`, but less than `1.0`, apply less than the default amount of sharpening
* `1.0` applies the default amount of sharpening
* values greater than `1.0` apply extra sharpening

==== `contrast`

Specifies the image contrast. Accepts a numeric value along the following spectrum:

* `0.0` applies minimum contrast
* values greater than `0.0`, but less than `1.0`, apply less than the default amount of contrast
* `1.0` applies the default amount of contrast
* values greater than `1.0` apply extra contrast

==== `brightness`

Specifies the image brightness, added as an offset to all pixels in the output image. Accepts a numeric value along the following spectrum:

* `-1.0` applies minimum brightness (black)
* `0.0` applies standard brightness
* `1.0` applies maximum brightness (white)

For many use cases, prefer xref:camera_software.adoc#ev[`ev`].

==== `saturation`

Specifies the image colour saturation. Accepts a numeric value along the following spectrum:

* `0.0` applies minimum saturation (greyscale)
* values greater than `0.0`, but less than `1.0`, apply less than the default amount of saturation
* `1.0` applies the default amount of saturation
* values greater than `1.0` apply extra saturation

==== `ev`

Specifies the https://en.wikipedia.org/wiki/Exposure_value[exposure value (EV)] compensation of the image in stops.
Accepts a numeric value that controls target values passed to the Automatic Exposure/Gain Control (AEC/AGC) processing algorithm along the following spectrum:

* `-10.0` applies minimum target values
* `0.0` applies standard target values
* `10.0` applies maximum target values

==== `shutter`

Specifies the exposure time, using the shutter, in _microseconds_. Gain can still vary when you use this option. If the camera runs at a framerate so fast that it does not allow for the specified exposure time (for instance, a framerate of 1000fps and an exposure time of 10000 microseconds), the sensor will use the maximum exposure time allowed by the framerate.

For a list of minimum and maximum shutter times for official cameras, see the xref:../accessories/camera.adoc#hardware-specification[camera hardware documentation]. Values above the maximum result in undefined behaviour.

==== `gain`

Alias: `--analoggain`

Sets the combined analogue and digital gain. When the sensor driver can provide the requested gain, only uses analogue gain. When analogue gain reaches the maximum value, the ISP applies digital gain. Accepts a numeric value.

For a list of analogue gain limits for official cameras, see the xref:../accessories/camera.adoc#hardware-specification[camera hardware documentation].

Sometimes, digital gain can exceed 1.0 even when the analogue gain limit is not exceeded. This can occur in the following situations:

* Either of the colour gains drops below 1.0, which will cause the digital gain to settle to 1.0/min(red_gain,blue_gain). This keeps the total digital gain applied to any colour channel above 1.0 to avoid discolouration artefacts.
* Slight variances during Automatic Exposure/Gain Control (AEC/AGC) changes.

==== `metering`

Default value: `centre`

Sets the metering mode of the Automatic Exposure/Gain Control (AEC/AGC) algorithm.
Accepts the following values: * `centre` - centre weighted metering * `spot` - spot metering * `average` - average or whole frame metering * `custom` - custom metering mode defined in the camera tuning file For more information on defining a custom metering mode, and adjusting region weights in existing metering modes, see the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera]. ==== `exposure` Sets the exposure profile. Changing the exposure profile should not affect the image exposure. Instead, different modes adjust gain settings to achieve the same net result. Accepts the following values: * `sport`: short exposure, larger gains * `normal`: normal exposure, normal gains * `long`: long exposure, smaller gains You can edit exposure profiles using tuning files. For more information, see the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera]. ==== `awb` Sets the Auto White Balance (AWB) mode. Accepts the following values: |=== | Mode name | Colour temperature range | `auto` | 2500K to 8000K | `incandescent` | 2500K to 3000K | `tungsten` | 3000K to 3500K | `fluorescent` | 4000K to 4700K | `indoor` | 3000K to 5000K | `daylight` | 5500K to 6500K | `cloudy` | 7000K to 8500K | `custom` | A custom range defined in the tuning file. |=== These values are only approximate: values could vary according to the camera tuning. No mode fully disables AWB. Instead, you can fix colour gains with xref:camera_software.adoc#awbgains[`awbgains`]. For more information on AWB modes, including how to define a custom one, see the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera]. ==== `awbgains` Sets a fixed red and blue gain value to be used instead of an Auto White Balance (AWB) algorithm. Set non-zero values to disable AWB. 
Accepts comma-separated numeric input in the following format: `<red gain>,<blue gain>`

==== `denoise`

Default value: `auto`

Sets the denoising mode. Accepts the following values:

* `auto`: Enables standard spatial denoise. Uses extra-fast colour denoise for video, and high-quality colour denoise for images. Enables no extra colour denoise in the preview window.
* `off`: Disables spatial and colour denoise.
* `cdn_off`: Disables colour denoise.
* `cdn_fast`: Uses fast colour denoise.
* `cdn_hq`: Uses high-quality colour denoise. Not appropriate for video/viewfinder due to reduced throughput.

Even fast colour denoise can lower framerates. High quality colour denoise _significantly_ lowers framerates.

==== `tuning-file`

Specifies the camera tuning file. The tuning file allows you to control many aspects of image processing, including the Automatic Exposure/Gain Control (AEC/AGC), Auto White Balance (AWB), colour shading correction, colour processing, denoising and more. Accepts a tuning file path as input.

For more information about tuning files, see xref:camera_software.adoc#tuning-files[Tuning Files].

==== `autofocus-mode`

Default value: `default`

Specifies the autofocus mode. Accepts the following values:

* `default`: puts the camera into continuous autofocus mode unless xref:camera_software.adoc#lens-position[`lens-position`] or xref:camera_software.adoc#autofocus-on-capture[`autofocus-on-capture`] override the mode to manual
* `manual`: does not move the lens at all unless manually configured with xref:camera_software.adoc#lens-position[`lens-position`]
* `auto`: only moves the lens for an autofocus sweep when the camera starts or just before capture if xref:camera_software.adoc#autofocus-on-capture[`autofocus-on-capture`] is also used
* `continuous`: adjusts the lens position automatically as the scene changes

This option is only supported for certain camera modules.

==== `autofocus-range`

Default value: `normal`

Specifies the autofocus range.
Accepts the following values:

* `normal`: focuses from reasonably close to infinity
* `macro`: focuses only on close objects, including the closest focal distances supported by the camera
* `full`: focuses on the entire range, from the very closest objects to infinity

This option is only supported for certain camera modules.

==== `autofocus-speed`

Default value: `normal`

Specifies the autofocus speed. Accepts the following values:

* `normal`: changes the lens position at normal speed
* `fast`: changes the lens position quickly

This option is only supported for certain camera modules.

==== `autofocus-window`

Specifies the autofocus window within the full field of the sensor. Accepts four decimal values, _ranged 0 to 1_, in the following format: `<x>,<y>,<w>,<h>`. Each of these values represents a proportion of the available width and height as a decimal between 0 and 1.

These values define the following proportions:

* `<x>`: X coordinates to skip before applying autofocus
* `<y>`: Y coordinates to skip before applying autofocus
* `<w>`: autofocus area width
* `<h>`: autofocus area height

The default value uses the middle third of the output image in both dimensions (1/9 of the total image area).

Examples:

* `rpicam-hello --autofocus-window 0.25,0.25,0.5,0.5` selects exactly a quarter of the total number of pixels, cropped from the centre of the image (skips the first 25% of X coordinates, skips the first 25% of Y coordinates, uses 50% of the total image width, uses 50% of the total image height).
* `rpicam-hello --autofocus-window 0,0,0.25,0.25` selects exactly one sixteenth of the total number of pixels, cropped from the top left of the image (skips the first 0% of X coordinates, skips the first 0% of Y coordinates, uses 25% of the image width, uses 25% of the image height).

This option is only supported for certain camera modules.

==== `lens-position`

Default value: `default`

Moves the lens to a fixed focal distance, normally given in dioptres (units of 1 / _distance in metres_).
Accepts the following spectrum of values: * `0.0`: moves the lens to the "infinity" position * Any other `number`: moves the lens to the 1 / `number` position. For example, the value `2.0` would focus at approximately 0.5m * `default`: move the lens to a default position which corresponds to the hyperfocal position of the lens Lens calibration is imperfect, so different camera modules of the same model may vary. ==== `verbose` Alias: `-v` Default value: `1` Sets the verbosity level. Accepts the following values: * `0`: no output * `1`: normal output * `2`: verbose output === Output file options ==== `output` Alias: `-o` Sets the name of the file used to record images or video. Besides plaintext file names, accepts the following special values: * `-`: write to stdout. * `udp://` (prefix): a network address for UDP streaming. * `tcp://` (prefix): a network address for TCP streaming. * Include the `%d` directive in the file name to replace the directive with a count that increments for each opened file. This directive supports standard C format directive modifiers. Examples: * `rpicam-vid -t 100000 --segment 10000 -o chunk%04d.h264` records a 100 second file in 10 second segments, where each file includes an incrementing four-digit counter padded with leading zeros: e.g. `chunk0001.h264`, `chunk0002.h264`, etc. * `rpicam-vid -t 0 --inline -o udp://192.168.1.13:5000` streams H.264 video to network address 192.168.1.13 using UDP on port 5000. ==== `wrap` Sets a maximum value for the counter used by the xref:camera_software.adoc#output[`output`] `%d` directive. The counter resets to zero after reaching this value. Accepts a numeric value. ==== `flush` Flushes output files to disk as soon as a frame finishes writing, instead of waiting for the system to handle it. Does not accept a value. ==== `post-process-file` Specifies a JSON file that configures the post-processing applied by the imaging pipeline. This applies to camera images _before_ they reach the application. 
This works similarly to the legacy `raspicam` "image effects". Accepts a file name path as input.

Post-processing is a large topic and admits the use of third-party software like OpenCV and TensorFlowLite to analyse and manipulate images. For more information, see xref:camera_software.adoc#post-processing-with-rpicam-apps[post-processing].

==== `buffer-count`

The number of buffers to allocate for still image capture or for video recording. The default value of zero lets each application choose a reasonable number for its own use case (1 for still image capture, and 6 for video recording). Increasing the number can sometimes help to reduce the number of frame drops, particularly at higher framerates.

==== `viewfinder-buffer-count`

As the `buffer-count` option, but applies when running in preview mode (that is, `rpicam-hello` or the preview, not capture, phase of `rpicam-still`).

==== `metadata`

Saves captured image metadata to a file, or to stdout when passed `-`. The fields in the metadata output will depend on the camera model in use. See also `metadata-format`.

==== `metadata-format`

Format to save the metadata in. Accepts the following values:

* `txt` for text format
* `json` for JSON format

In text format, each line has the form `key=value`. In JSON format, the output is a JSON object.

This option does nothing unless `--metadata` is also specified.

---

# Source: rpicam_options_detect.adoc

=== Detection options

The command line options specified in this section apply only to object detection using `rpicam-detect`.

To pass one of the following options to `rpicam-detect`, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.

Some options have shorthand aliases, for example `-h` instead of `--help`.
Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.

==== `object`

Detects objects with the given name, sourced from the model's label file. Accepts an object name as input.

==== `gap`

Waits at least this many frames between captures. Accepts numeric values.

---

# Source: rpicam_options_libav.adoc

=== `libav` options

The command line options specified in this section apply only to the `libav` video backend. To enable the `libav` backend, pass the xref:camera_software.adoc#codec[`codec`] option the value `libav`.

To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.

Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.

==== `libav-format`

Sets the `libav` output format. Accepts the following values:

* `mkv` encoding
* `mp4` encoding
* `avi` encoding
* `h264` streaming
* `mpegts` streaming

If you do not provide this option, the file extension passed to the xref:camera_software.adoc#output[`output`] option determines the file format.

==== `libav-audio`

Enables audio recording. When enabled, you must also specify an xref:camera_software.adoc#audio-codec[`audio-codec`]. Does not accept a value.

==== `audio-codec`

Default value: `aac`

Selects an audio codec for output. For a list of available codecs, run `ffmpeg -codecs`.

==== `audio-bitrate`

Sets the bitrate for audio encoding in bits per second. Accepts numeric input.
Example: `rpicam-vid --codec libav -o test.mp4 --audio-codec mp2 --audio-bitrate 16384` (records audio at 16 kilobits/sec with the mp2 codec)

==== `audio-samplerate`

Default value: `0`

Sets the audio sampling rate in Hz. Accepts numeric input. `0` uses the input sample rate.

==== `audio-device`

Selects an ALSA input device for audio recording. For a list of available devices, run the following command:

[source,console]
----
$ pactl list | grep -A2 'Source #' | grep 'Name: '
----

You should see output similar to the following:

----
Name: alsa_output.platform-bcm2835_audio.analog-stereo.monitor
Name: alsa_output.platform-fef00700.hdmi.hdmi-stereo.monitor
Name: alsa_output.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.analog-stereo.monitor
Name: alsa_input.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.mono-fallback
----

==== `av-sync`

Shifts the audio sample timestamp by a value in microseconds. Accepts positive and negative numeric values.

---

# Source: rpicam_options_still.adoc

=== Image options

The command line options specified in this section apply only to still image output.

To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.

Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.

==== `quality`

Alias: `-q`

Default value: `93`

Sets the JPEG quality. Accepts a value between `1` and `100`.

==== `exif`

Saves extra EXIF tags in the JPEG output file. Only applies to JPEG output. Because of limitations in the `libexif` library, many tags are currently (incorrectly) formatted as ASCII and print a warning in the terminal.
This option is necessary to add certain EXIF tags related to camera settings. You can add tags unrelated to camera settings to the output JPEG after recording with https://exiftool.org/[ExifTool].

Example: `rpicam-still -o test.jpg --exif IFD0.Artist=Someone`

==== `timelapse`

Records images at the specified interval. Accepts an interval in milliseconds. Combine this setting with xref:camera_software.adoc#timeout[`timeout`] to capture repeated images over time.

You can specify separate filenames for each output file using string formatting, e.g. `--output test%d.jpg`.

Example: `rpicam-still -t 100000 -o test%d.jpg --timelapse 10000` captures an image every 10 seconds for 100 seconds.

==== `framestart`

Configures a starting value for the frame counter accessed in output file names as `%d`. Accepts an integer starting value.

==== `datetime`

Uses the current date and time in the output file name, in the form `MMDDhhmmss.jpg`:

* `MM` = 2-digit month number
* `DD` = 2-digit day number
* `hh` = 2-digit 24-hour hour number
* `mm` = 2-digit minute number
* `ss` = 2-digit second number

Does not accept a value.

==== `timestamp`

Uses the current system https://en.wikipedia.org/wiki/Unix_time[Unix time] as the output file name. Does not accept a value.

==== `restart`

Default value: `0`

Configures the restart marker interval for JPEG output. JPEG restart markers can help limit the impact of corruption on JPEG images, and additionally enable the use of multi-threaded JPEG encoding and decoding. Accepts an integer value.

==== `immediate`

Captures the image immediately when the application runs.

==== `keypress`

Alias: `-k`

Captures an image when the xref:camera_software.adoc#timeout[`timeout`] expires or on press of the *Enter* key, whichever comes first. Press the `x` key, then *Enter*, to exit without capturing. Does not accept a value.

==== `signal`

Captures an image when the xref:camera_software.adoc#timeout[`timeout`] expires or when `SIGUSR1` is received.
Use `SIGUSR2` to exit without capturing. Does not accept a value.

==== `thumb`

Default value: `320:240:70`

Configures the dimensions and quality of the thumbnail in the following format: `<width>:<height>:<quality>` (or `none`, which omits the thumbnail).

==== `encoding`

Alias: `-e`

Default value: `jpg`

Sets the encoder to use for image output. Accepts the following values:

* `jpg` - JPEG
* `png` - PNG
* `bmp` - BMP
* `rgb` - binary dump of uncompressed RGB pixels
* `yuv420` - binary dump of uncompressed YUV420 pixels

This option always determines the encoding, overriding the extension passed to xref:camera_software.adoc#output[`output`]. When using the xref:camera_software.adoc#datetime[`datetime`] and xref:camera_software.adoc#timestamp[`timestamp`] options, this option determines the output file extension.

==== `raw`

Alias: `-r`

Saves a raw Bayer file in DNG format in addition to the output image. Replaces the output file name extension with `.dng`. You can process these standard DNG files with tools like _dcraw_ or _RawTherapee_. Does not accept a value.

The image data in the raw file is exactly what came out of the sensor, with no processing from the ISP or anything else. The EXIF data saved in the file, among other things, includes:

* exposure time
* analogue gain (the ISO tag is 100 times the analogue gain used)
* white balance gains (which are the reciprocals of the "as shot neutral" values)
* the colour matrix used by the ISP

==== `latest`

Creates a symbolic link to the most recently saved file. Accepts a symbolic link name as input.

==== `autofocus-on-capture`

If set, runs an autofocus cycle _just before_ capturing an image. Interacts with the following xref:camera_software.adoc#autofocus-mode[`autofocus-mode`] values:

* `default` or `manual`: only runs the capture-time autofocus cycle.
* `auto`: runs an additional autofocus cycle when the preview window loads.
* `continuous`: ignores this option, instead continually focusing throughout the preview.
Does not require a value, but you can pass `1` to enable and `0` to disable. Not passing a value is equivalent to passing `1`. Only supported by some camera modules (such as the _Raspberry Pi Camera Module 3_). --- # Source: rpicam_options_vid.adoc *Note: This file could not be automatically converted from AsciiDoc.* === Video options The command line options specified in this section apply only to video output. To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes. Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability. ==== `quality` Alias: `-q` Default value: `50` Accepts an MJPEG quality level between 1 and 100. Only applies to videos encoded in the MJPEG format. ==== `bitrate` Alias: `-b` Controls the target bitrate used by the H.264 encoder in bits per second. Only applies to videos encoded in the H.264 format. Impacts the size of the output video. Example: `rpicam-vid -b 10000000 --width 1920 --height 1080 -o test.h264` ==== `intra` Alias: `-g` Default value: `60` Sets the frequency of Iframes (intra frames) in the H.264 bitstream. Accepts a number of frames. Only applies to videos encoded in the H.264 format. ==== `profile` Sets the H.264 profile. Accepts the following values: * `baseline` * `main` * `high` Only applies to videos encoded in the H.264 format. ==== `level` Sets the H.264 level. Accepts the following values: * `4` * `4.1` * `4.2` Only applies to videos encoded in the H.264 format. ==== `codec` Sets the encoder to use for video output. Accepts the following values: * `h264` - use H.264 encoder (the default) * `mjpeg` - use MJPEG encoder * `yuv420` - output uncompressed YUV420 frames. 
* `libav` - use the libav backend to encode audio and video (for more information, see xref:camera_software.adoc#libav-integration-with-rpicam-vid[`libav`])

==== `save-pts`

WARNING: Raspberry Pi 5 does not support the `save-pts` option. Use `libav` to automatically generate timestamps for container formats instead.

Enables frame timestamp output, which allows you to convert the bitstream into a container format using a tool like `mkvmerge`. Accepts a plaintext file name for the timestamp output file.

Example: `rpicam-vid -o test.h264 --save-pts timestamps.txt`

You can then use the following command to generate an MKV container file from the bitstream and timestamps file:

[source,console]
----
$ mkvmerge -o test.mkv --timecodes 0:timestamps.txt test.h264
----

==== `keypress`

Alias: `-k`

Allows the CLI to enable and disable video output using the *Enter* key. Always starts in the recording state unless specified otherwise with xref:camera_software.adoc#initial[`initial`]. Type the `x` key and press *Enter* to exit. Does not accept a value.

==== `signal`

Alias: `-s`

Allows the CLI to enable and disable video output using `SIGUSR1`. Use `SIGUSR2` to exit. Always starts in the recording state unless specified otherwise with xref:camera_software.adoc#initial[`initial`]. Does not accept a value.

==== `initial`

Default value: `record`

Specifies whether to start the application with video output enabled or disabled. Accepts the following values:

* `record`: Starts with video output enabled.
* `pause`: Starts with video output disabled.

Use this option with either xref:camera_software.adoc#keypress[`keypress`] or xref:camera_software.adoc#signal[`signal`] to toggle between the two states.

==== `split`

When toggling recording with xref:camera_software.adoc#keypress[`keypress`] or xref:camera_software.adoc#signal[`signal`], writes the video output from separate recording sessions into separate files. Does not accept a value.
Unless combined with xref:camera_software.adoc#output[`output`] to specify unique names for each file, overwrites each time it writes a file. ==== `segment` Cuts video output into multiple files of the passed duration. Accepts a duration in milliseconds. If passed a very small duration (for instance, `1`), records each frame to a separate output file to simulate burst capture. You can specify separate filenames for each file using string formatting, e.g. `--output test%04d.h264`. ==== `circular` Default value: `4` Writes video recording into a circular buffer in memory. When the application quits, records the circular buffer to disk. Accepts an optional size in megabytes. ==== `inline` Writes a sequence header in every Iframe (intra frame). This can help clients decode the video sequence from any point in the video, instead of just the beginning. Recommended with xref:camera_software.adoc#segment[`segment`], xref:camera_software.adoc#split[`split`], xref:camera_software.adoc#circular[`circular`], and streaming options. Only applies to videos encoded in the H.264 format. Does not accept a value. ==== `listen` Waits for an incoming client connection before encoding video. Intended for network streaming over TCP/IP. Does not accept a value. ==== `frames` Records exactly the specified number of frames. Any non-zero value overrides xref:camera_software.adoc#timeout[`timeout`]. Accepts a nonzero integer. ==== `framerate` Records exactly the specified framerate. Accepts a nonzero integer. ==== `low-latency` On a Pi 5, the `--low-latency` option will reduce the encoding latency, which may be beneficial for real-time streaming applications, in return for (slightly) less good coding efficiency (for example, B frames and arithmetic coding will no longer be used). ==== `sync` Run the camera in software synchronisation mode, where multiple cameras synchronise frames to the same moment in time. The `sync` mode can be set to either `client` or `server`. 
For more information, please refer to the detailed explanation of xref:camera_software.adoc#software-camera-synchronisation[how software synchronisation works]. --- # Source: rpicam_raw.adoc *Note: This file could not be automatically converted from AsciiDoc.* === `rpicam-raw` `rpicam-raw` records video as raw Bayer frames directly from the sensor. It does not show a preview window. To record a two second raw clip to a file named `test.raw`, run the following command: [source,console] ---- $ rpicam-raw -t 2000 -o test.raw ---- `rpicam-raw` outputs raw frames with no formatting information at all, one directly after another. The application prints the pixel format and image dimensions to the terminal window to help the user interpret the pixel data. By default, `rpicam-raw` outputs raw frames in a single, potentially very large, file. Use the xref:camera_software.adoc#segment[`segment`] option to direct each raw frame to a separate file, using the `%05d` xref:camera_software.adoc#output[directive] to make each frame filename unique: [source,console] ---- $ rpicam-raw -t 2000 --segment 1 -o test%05d.raw ---- With a fast storage device, `rpicam-raw` can write 18 MB 12-megapixel HQ camera frames to disk at 10fps. `rpicam-raw` has no capability to format output frames as DNG files; for that functionality, use xref:camera_software.adoc#rpicam-still[`rpicam-still`]. Use the xref:camera_software.adoc#framerate[`framerate`] option at a level beneath 10 to avoid dropping frames: [source,console] ---- $ rpicam-raw -t 5000 --width 4056 --height 3040 -o test.raw --framerate 8 ---- For more information on the raw formats, see the xref:camera_software.adoc#mode[`mode` documentation]. --- # Source: rpicam_still.adoc *Note: This file could not be automatically converted from AsciiDoc.* === `rpicam-still` `rpicam-still`, like `rpicam-jpeg`, helps you capture images on Raspberry Pi devices. 
Unlike `rpicam-jpeg`, `rpicam-still` supports many options provided in the legacy `raspistill` application. To capture a full resolution JPEG image and save it to a file named `test.jpg`, run the following command: [source,console] ---- $ rpicam-still --output test.jpg ---- ==== Encoders `rpicam-still` can save images in multiple formats, including `png`, `bmp`, and both RGB and YUV binary pixel dumps. To read these binary dumps, any application reading the files must understand the pixel arrangement. Use the xref:camera_software.adoc#encoding[`encoding`] option to specify an output format. The file name passed to xref:camera_software.adoc#output[`output`] has no impact on the output file type. To capture a full resolution PNG image and save it to a file named `test.png`, run the following command: [source,console] ---- $ rpicam-still --encoding png --output test.png ---- For more information about specifying an image format, see the xref:camera_software.adoc#encoding[`encoding` option reference]. ==== Capture raw images Raw images are the images produced directly by the image sensor, before any processing is applied to them either by the Image Signal Processor (ISP) or CPU. Colour image sensors usually use the Bayer format. Use the xref:camera_software.adoc#raw[`raw`] option to capture raw images. To capture an image, save it to a file named `test.jpg`, and also save a raw version of the image to a file named `test.dng`, run the following command: [source,console] ---- $ rpicam-still --raw --output test.jpg ---- `rpicam-still` saves raw images in the DNG (Adobe Digital Negative) format. To determine the filename of the raw images, `rpicam-still` uses the same name as the output file, with the extension changed to `.dng`. To work with DNG images, use an application like https://en.wikipedia.org/wiki/Dcraw[Dcraw] or https://en.wikipedia.org/wiki/RawTherapee[RawTherapee]. 
DNG files contain metadata about the image capture, including black levels, white balance information and the colour matrix used by the ISP to produce the JPEG. Use https://exiftool.org/[ExifTool] to view DNG metadata. The following output shows typical metadata stored in a raw image captured by a Raspberry Pi using the HQ camera:

----
File Name                       : test.dng
Directory                       : .
File Size                       : 24 MB
File Modification Date/Time     : 2021:08:17 16:36:18+01:00
File Access Date/Time           : 2021:08:17 16:36:18+01:00
File Inode Change Date/Time     : 2021:08:17 16:36:18+01:00
File Permissions                : rw-r--r--
File Type                       : DNG
File Type Extension             : dng
MIME Type                       : image/x-adobe-dng
Exif Byte Order                 : Little-endian (Intel, II)
Make                            : Raspberry Pi
Camera Model Name               : /base/soc/i2c0mux/i2c@1/imx477@1a
Orientation                     : Horizontal (normal)
Software                        : rpicam-still
Subfile Type                    : Full-resolution Image
Image Width                     : 4056
Image Height                    : 3040
Bits Per Sample                 : 16
Compression                     : Uncompressed
Photometric Interpretation      : Color Filter Array
Samples Per Pixel               : 1
Planar Configuration            : Chunky
CFA Repeat Pattern Dim          : 2 2
CFA Pattern 2                   : 2 1 1 0
Black Level Repeat Dim          : 2 2
Black Level                     : 256 256 256 256
White Level                     : 4095
DNG Version                     : 1.1.0.0
DNG Backward Version            : 1.0.0.0
Unique Camera Model             : /base/soc/i2c0mux/i2c@1/imx477@1a
Color Matrix 1                  : 0.8545269369 -0.2382823821 -0.09044229197 -0.1890484985 1.063961506 0.1062747385 -0.01334283455 0.1440163847 0.2593136724
As Shot Neutral                 : 0.4754476844 1 0.413686484
Calibration Illuminant 1        : D65
Strip Offsets                   : 0
Strip Byte Counts               : 0
Exposure Time                   : 1/20
ISO                             : 400
CFA Pattern                     : [Blue,Green][Green,Red]
Image Size                      : 4056x3040
Megapixels                      : 12.3
Shutter Speed                   : 1/20
----

To find the analogue gain, divide the ISO number by 100. The Auto White Balance (AWB) algorithm determines a single calibrated illuminant, which is always labelled `D65`.
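The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration only: the `metadata` dictionary is hand-copied from the ExifTool output shown above, not read from the file.

```python
# Minimal sketch: recover camera settings from the DNG metadata shown above.
# The values are copied by hand from the ExifTool output; a real script
# would parse `exiftool` output or use a DNG-reading library.
metadata = {
    "ISO": 400,
    "As Shot Neutral": (0.4754476844, 1.0, 0.413686484),
}

# Analogue gain: the ISO tag is 100 times the analogue gain used.
analogue_gain = metadata["ISO"] / 100

# White balance gains: reciprocals of the "as shot neutral" values.
red_gain = 1.0 / metadata["As Shot Neutral"][0]
blue_gain = 1.0 / metadata["As Shot Neutral"][2]

print(f"analogue gain: {analogue_gain}")  # 4.0
print(f"red gain: {red_gain:.3f}, blue gain: {blue_gain:.3f}")
```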
==== Capture long exposures To capture very long exposure images, disable the Automatic Exposure/Gain Control (AEC/AGC) and Auto White Balance (AWB). These algorithms will otherwise force the user to wait for a number of frames while they converge. To disable these algorithms, supply explicit values for gain and AWB. Because long exposures take plenty of time already, it often makes sense to skip the preview phase entirely with the xref:camera_software.adoc#immediate[`immediate`] option. To perform a 100 second exposure capture, run the following command: [source,console] ---- $ rpicam-still -o long_exposure.jpg --shutter 100000000 --gain 1 --awbgains 1,1 --immediate ---- To find the maximum exposure times of official Raspberry Pi cameras, see xref:../accessories/camera.adoc#hardware-specification[the camera hardware specification]. ==== Create a time lapse video To create a time lapse video, capture a still image at a regular interval, such as once a minute, then use an application to stitch the pictures together into a video. [tabs] ====== `rpicam-still` time lapse mode:: + To use the built-in time lapse mode of `rpicam-still`, use the xref:camera_software.adoc#timelapse[`timelapse`] option. This option accepts a value representing the period of time you want your Raspberry Pi to wait between captures, in milliseconds. + First, create a directory where you can store your time lapse photos: + [source,console] ---- $ mkdir timelapse ---- + Run the following command to create a time lapse over 30 seconds, recording a photo every two seconds, saving output into `image0000.jpg` through `image0013.jpg`: + [source,console] ---- $ rpicam-still --timeout 30000 --timelapse 2000 -o timelapse/image%04d.jpg ---- `cron`:: + You can also automate time lapses with `cron`. First, create the script, named `timelapse.sh` containing the following commands. 
Replace the `<username>` placeholder with the name of your user account on your Raspberry Pi:
+
[source,bash]
----
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
rpicam-still -o /home/<username>/timelapse/$DATE.jpg
----
+
Then, make the script executable:
+
[source,console]
----
$ chmod +x timelapse.sh
----
+
Create the `timelapse` directory into which you'll save time lapse pictures:
+
[source,console]
----
$ mkdir timelapse
----
+
Open your crontab for editing:
+
[source,console]
----
$ crontab -e
----
+
Once you have the file open in an editor, add the following line to schedule an image capture every minute, replacing the `<username>` placeholder with the username of your primary user account:
+
----
* * * * * /home/<username>/timelapse.sh 2>&1
----
+
Save and exit, and you should see this message:
+
----
crontab: installing new crontab
----
+
To stop recording images for the time lapse, run `crontab -e` again and remove the above line from your crontab.
======

===== Stitch images together

Once you have a series of time lapse photos, you probably want to combine them into a video. Use `ffmpeg` to do this on a Raspberry Pi.

First, install `ffmpeg`:

[source,console]
----
$ sudo apt install ffmpeg
----

Run the following command from the directory that contains the `timelapse` directory to convert your JPEG files into an mp4 video:

[source,console]
----
$ ffmpeg -r 10 -f image2 -pattern_type glob -i 'timelapse/*.jpg' -s 1280x720 -vcodec libx264 timelapse.mp4
----

The command above uses the following parameters:

* `-r 10`: sets the frame rate (Hz value) to ten frames per second in the output video
* `-f image2`: sets `ffmpeg` to read from a list of image files specified by a pattern
* `-pattern_type glob`: use wildcard patterns (globbing) to interpret filename input with `-i`
* `-i 'timelapse/*.jpg'`: specifies input files to match JPG files in the `timelapse` directory
* `-s 1280x720`: scales to 720p
* `-vcodec libx264`: use the software x264 encoder
* `timelapse.mp4`: the name of the output video file
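As a sanity check on these numbers, the relationship between the time lapse capture settings and the length of the finished video can be worked out directly. This sketch assumes the capture interval dominates; the exact frame count also depends on per-shot startup and capture time.

```python
# Relate the time lapse capture settings to the output video length.
timeout_ms = 30_000   # rpicam-still --timeout 30000
interval_ms = 2_000   # rpicam-still --timelapse 2000
output_fps = 10       # ffmpeg -r 10

frames = timeout_ms // interval_ms        # roughly 15 captures
video_seconds = frames / output_fps       # 1.5 seconds of video

print(f"~{frames} frames -> ~{video_seconds} s of video at {output_fps} fps")
```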
For more information about `ffmpeg` options, run `ffmpeg --help` in a terminal.

---

# Source: rpicam_vid.adoc

=== `rpicam-vid`

`rpicam-vid` helps you capture video on Raspberry Pi devices. `rpicam-vid` displays a preview window and writes an encoded bitstream to the specified output. This produces an unpackaged video bitstream that is not wrapped in any kind of container format (such as an mp4 file).

NOTE: When available, `rpicam-vid` uses hardware H.264 encoding.

For example, the following command writes a ten-second video to a file named `test.h264`:

[source,console]
----
$ rpicam-vid -t 10s -o test.h264
----

You can play the resulting file with ffplay and other video players:

[source,console]
----
$ ffplay test.h264
----

[WARNING]
====
Older versions of vlc were able to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
====

On Raspberry Pi 5, you can output to the MP4 container format directly by specifying the `mp4` file extension for your output file:

[source,console]
----
$ rpicam-vid -t 10s -o test.mp4
----

On Raspberry Pi 4, or earlier devices, you can save MP4 files using:

[source,console]
----
$ rpicam-vid -t 10s --codec libav -o test.mp4
----

==== Encoders

`rpicam-vid` supports motion JPEG as well as both uncompressed and unformatted YUV420:

[source,console]
----
$ rpicam-vid -t 10000 --codec mjpeg -o test.mjpeg
----

[source,console]
----
$ rpicam-vid -t 10000 --codec yuv420 -o test.data
----

The xref:camera_software.adoc#codec[`codec`] option determines the output format, not the extension of the output file.

The xref:camera_software.adoc#segment[`segment`] option breaks output files up into chunks of the segment size (given in milliseconds).
This is handy for breaking a motion JPEG stream up into individual JPEG files by specifying very short (1 millisecond) segments. For example, the following command combines segments of 1 millisecond with a counter in the output file name to generate a new filename for each segment: [source,console] ---- $ rpicam-vid -t 10000 --codec mjpeg --segment 1 -o test%05d.jpeg ---- ==== Capture high framerate video To minimise frame drops for high framerate (> 60fps) video, try the following configuration tweaks: * Set the https://en.wikipedia.org/wiki/Advanced_Video_Coding#Levels[H.264 target level] to 4.2 with `--level 4.2`. * Disable software colour denoise processing by setting the xref:camera_software.adoc#denoise[`denoise`] option to `cdn_off`. * Disable the display window with xref:camera_software.adoc#nopreview[`nopreview`] to free up some additional CPU cycles. * Set `force_turbo=1` in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] to ensure that the CPU clock does not throttle during video capture. For more information, see xref:config_txt.adoc#force_turbo[the `force_turbo` documentation]. * Adjust the ISP output resolution with `--width 1280 --height 720` or something even lower to achieve your framerate target. * On Raspberry Pi 4, you can overclock the GPU to improve performance by adding `gpu_freq=550` or higher in `/boot/firmware/config.txt`. See xref:config_txt.adoc#overclocking[the overclocking documentation] for further details. The following command demonstrates how you might achieve 1280×720 120fps video: [source,console] ---- $ rpicam-vid --level 4.2 --framerate 120 --width 1280 --height 720 --save-pts timestamp.pts -o video.264 -t 10000 --denoise cdn_off -n ---- ==== `libav` integration with `rpicam-vid` `rpicam-vid` can use the `ffmpeg`/`libav` codec backend to encode audio and video streams. You can either save these streams to a file or stream them over the network. 
`libav` uses hardware H.264 video encoding when present. To enable the `libav` backend, pass `libav` to the xref:camera_software.adoc#codec[`codec`] option: [source,console] ---- $ rpicam-vid --codec libav --libav-format avi --libav-audio --output example.avi ---- ==== Low latency video with the Pi 5 Pi 5 uses software video encoders. These generally output frames with a longer latency than the old hardware encoders, and this can sometimes be an issue for real-time streaming applications. In this case, please add the option `--low-latency` to the `rpicam-vid` command. This will alter certain encoder options to output the encoded frame more quickly. The downside is that coding efficiency is (slightly) less good, and that the processor's multiple cores may be used (slightly) less efficiently. The maximum framerate that can be encoded may be slightly reduced (though it will still easily achieve 1080p30). --- # Source: streaming.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Stream video over a network with `rpicam-apps` This section describes how to stream video over a network using `rpicam-vid`. Whilst it's possible to stream very simple formats without using `libav`, for most applications we recommend using the xref:camera_software.adoc#libav-integration-with-rpicam-vid[`libav` backend]. 
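The UDP, TCP and RTSP examples that follow all invoke `rpicam-vid` with the same basic shape. As a convenience, a small helper like the following (hypothetical, not part of `rpicam-apps`) can assemble the argument list for the recommended MPEG-TS case; `192.0.2.1:5000` is a placeholder address:

```python
import shlex

def mpegts_stream_command(url, timeout_ms=0, preview=False):
    """Build an rpicam-vid argument list that streams an MPEG-2 Transport
    Stream via the libav backend to the given udp:// or tcp:// URL."""
    cmd = ["rpicam-vid", "-t", str(timeout_ms)]
    if not preview:
        cmd.append("-n")  # no preview window on the server
    cmd += ["--codec", "libav", "--libav-format", "mpegts", "-o", url]
    return cmd

print(shlex.join(mpegts_stream_command("udp://192.0.2.1:5000")))
# -> rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o udp://192.0.2.1:5000
```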
=== UDP

To stream video over UDP using a Raspberry Pi as a server, use the following command, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming:

[source,console]
----
$ rpicam-vid -t 0 -n --inline -o udp://<ip-addr>:<port>
----

To view video streamed over UDP using a Raspberry Pi as a client, use the following command, replacing the `<port>` placeholder with the port you would like to stream from:

[source,console]
----
$ ffplay udp://@:<port> -fflags nobuffer -flags low_delay -framedrop
----

As noted previously, `vlc` no longer handles unencapsulated H.264 streams. In fact, support for unencapsulated H.264 can generally be quite poor so it is often better to send an MPEG-2 Transport Stream instead. Making use of `libav`, this can be accomplished with:

[source,console]
----
$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o udp://<ip-addr>:<port>
----

In this case, we can also play the stream successfully with `vlc`:

[source,console]
----
$ vlc udp://@:<port>
----

=== TCP

You can also stream video over TCP. As before, we can send an unencapsulated H.264 stream over the network. To use a Raspberry Pi as a server:

[source,console]
----
$ rpicam-vid -t 0 -n --inline --listen -o tcp://0.0.0.0:<port>
----

To view video streamed over TCP using a Raspberry Pi as a client, assuming the server is running at 30 frames per second, use the following command:

[source,console]
----
$ ffplay tcp://<ip-addr-of-server>:<port> -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop
----

But as with the UDP examples, it is often preferable to send an MPEG-2 Transport Stream as this is generally better supported.
To do this, use:

[source,console]
----
$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o tcp://0.0.0.0:<port>?listen=1
----

We can now play this back using a variety of media players, including `vlc`:

[source,console]
----
$ vlc tcp://<ip-addr-of-server>:<port>
----

=== RTSP

We can use VLC as an RTSP server, however, we must send it an MPEG-2 Transport Stream as it no longer understands unencapsulated H.264:

[source,console]
----
$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o - | cvlc stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/stream1}'
----

To view video streamed over RTSP using a Raspberry Pi as a client, use the following command:

[source,console]
----
$ ffplay rtsp://<ip-addr-of-server>:8554/stream1 -fflags nobuffer -flags low_delay -framedrop
----

Alternatively, use the following command on a client to stream using VLC:

[source,console]
----
$ vlc rtsp://<ip-addr-of-server>:8554/stream1
----

If you want to see a preview window on the server, just drop the `-n` option (see xref:camera_software.adoc#nopreview[`nopreview`]).

=== `libav` and Audio

We have already been using `libav` as the backend for network streaming. `libav` allows us to add an audio stream, so long as we're using a format - like the MPEG-2 Transport Stream - that permits audio data.

We can take one of our previous commands, like the one for streaming an MPEG-2 Transport Stream over TCP, and simply add the `--libav-audio` option:

[source,console]
----
$ rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "tcp://<ip-addr>:<port>?listen=1"
----

You can stream over UDP with a similar command:

[source,console]
----
$ rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "udp://<ip-addr>:<port>"
----

=== GStreamer

https://gstreamer.freedesktop.org/[GStreamer] is a Linux framework for reading, processing and playing multimedia files. We can also use it in conjunction with `rpicam-vid` for network streaming.
This setup uses `rpicam-vid` to output an H.264 bitstream to stdout, though as we've done previously, we're going to encapsulate it in an MPEG-2 Transport Stream for better downstream compatibility.

Then, we use the GStreamer `fdsrc` element to receive the bitstream, and extra GStreamer elements to send it over the network. On the server, run the following command to start the stream, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming:

[source,console]
----
$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o - | gst-launch-1.0 fdsrc fd=0 ! udpsink host=<ip-addr> port=<port>
----

We could of course use anything (such as vlc) as the client, and the best GStreamer clients for playback are beyond the scope of this document. However, we note that the following pipeline (with the obvious substitutions) would work on a Pi 4 or earlier device:

[source,console]
----
$ gst-launch-1.0 udpsrc address=<ip-addr> port=<port> ! tsparse ! tsdemux ! h264parse ! queue ! v4l2h264dec ! autovideosink
----

For a Pi 5, replace `v4l2h264dec` by `avdec_h264`.

TIP: To test this configuration, run the server and client commands in separate terminals on the same device, using `localhost` as the address.

==== `libcamerasrc` GStreamer element

`libcamera` provides a `libcamerasrc` GStreamer element which can be used directly instead of `rpicam-vid`. To use this element, run the following command on the server, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming. On a Pi 4 or earlier device, use:

[source,console]
----
$ gst-launch-1.0 libcamerasrc ! capsfilter caps=video/x-raw,width=640,height=360,format=NV12,interlace-mode=progressive ! v4l2h264enc extra-controls="controls,repeat_sequence_header=1" ! 'video/x-h264,level=(string)4' ! h264parse ! mpegtsmux ! udpsink host=<ip-addr> port=<port>
----

On a Pi 5 you would have to replace `v4l2h264enc extra-controls="controls,repeat_sequence_header=1"` by `x264enc speed-preset=1 threads=1`.

On the client we could use the same playback pipeline as we did just above, or other streaming media players.

=== WebRTC

Streaming over WebRTC (for example, to web browsers) is best accomplished using third party software. https://github.com/bluenviron/mediamtx[MediaMTX], for example, includes native Raspberry Pi camera support which makes it easy to use.

To install it, download the latest version from the https://github.com/bluenviron/mediamtx/releases[releases] page. Raspberry Pi OS 64-bit users will want the "linux_arm64v8" compressed tar file (ending `.tar.gz`). Unpack it and you will get a `mediamtx` executable and a configuration file called `mediamtx.yml`. It's worth backing up the `mediamtx.yml` file because it documents many Raspberry Pi camera options that you may want to investigate later.

To stream the camera, replace the contents of `mediamtx.yml` by:

----
paths:
  cam:
    source: rpiCamera
----

and start the `mediamtx` executable. On a browser, enter `http://<ip-addr>:8889/cam` into the address bar. If you want MediaMTX to acquire the camera only when the stream is requested, add the following line to the previous `mediamtx.yml`:

----
    sourceOnDemand: yes
----

Consult the original `mediamtx.yml` for additional configuration parameters that let you select the image size, the camera mode, the bitrate and so on - just search for `rpi`.

==== Customised image streams with WebRTC

MediaMTX is great if you want to stream just the camera images. But what if we want to add some extra information or overlay, or do some extra processing on the images? Before starting, ensure that you've built a version of `rpicam-apps` that includes OpenCV support.
Check it by running

[source,console]
----
$ rpicam-hello --post-process-file rpicam-apps/assets/annotate_cv.json
----

and looking for the overlaid text information at the top of the image.

Next, paste the following into your `mediamtx.yml` file:

----
paths:
  cam:
    source: udp://127.0.0.1:1234
----

Now, start `mediamtx` and then, if you're using a Pi 5, in a new terminal window, enter:

[source,console]
----
$ rpicam-vid -t 0 -n --codec libav --libav-video-codec-opts "profile=baseline" --libav-format mpegts -o udp://127.0.0.1:1234?pkt_size=1316 --post-process-file rpicam-apps/assets/annotate_cv.json
----

(On a Pi 4 or earlier device, leave out the `--libav-video-codec-opts "profile=baseline"` part of the command.)

On another computer, you can now visit the same address as before, namely `http://<ip-addr>:8889/cam`.

The reason for specifying "baseline" profile on a Pi 5 is that MediaMTX doesn't support B frames, so we need to stop the encoder from producing them. On earlier devices, with hardware encoders, B frames are never generated so there is no issue. On a Pi 5 you could alternatively remove this option and replace it with `--low-latency` which will also prevent B frames, and produce a (slightly less well compressed) stream with reduced latency.

[NOTE]
====
If you notice occasional pauses in the video stream, this may be because the UDP receive buffers on the Pi (passing data from `rpicam-vid` to MediaMTX) are too small. To increase them permanently, add

----
net.core.rmem_default=1000000
net.core.rmem_max=1000000
----

to your `/etc/sysctl.conf` file (and reboot or run `sudo sysctl -p`).
====

---

# Source: troubleshooting.adoc

== Troubleshooting

If your Camera Module doesn't work like you expect, try some of the following fixes:

* On Raspberry Pi 3 and earlier devices running Raspberry Pi OS _Bullseye_ or earlier:
** To enable hardware-accelerated camera previews, enable *Glamor*.
To enable Glamor, enter `sudo raspi-config` in a terminal, select `Advanced Options` > `Glamor` > `Yes`. Then reboot your Raspberry Pi with `sudo reboot`. ** If you see an error related to the display driver, add `dtoverlay=vc4-fkms-v3d` or `dtoverlay=vc4-kms-v3d` to `/boot/config.txt`. Then reboot your Raspberry Pi with `sudo reboot`. * On Raspberry Pi 3 and earlier, the graphics hardware can only support images up to 2048×2048 pixels, which places a limit on the camera images that can be resized into the preview window. As a result, video encoding of images larger than 2048 pixels wide produces corrupted or missing preview images. * On Raspberry Pi 4, the graphics hardware can only support images up to 4096×4096 pixels, which places a limit on the camera images that can be resized into the preview window. As a result, video encoding of images larger than 4096 pixels wide produces corrupted or missing preview images. * The preview window may show display tearing in a desktop environment. This is a known, unfixable issue. * Check that the FFC (Flat Flexible Cable) is firmly seated, fully inserted, and that the contacts face the correct direction. The FFC should be evenly inserted, not angled. * If you use a connector between the camera and your Raspberry Pi, check that the ports on the connector are firmly seated, fully inserted, and that the contacts face the correct direction. * Check to make sure that the FFC (Flat Flexible Cable) is attached to the CSI (Camera Serial Interface), _not_ the DSI (Display Serial Interface). The connector fits into either port, but only the CSI port powers and controls the camera. Look for the `CSI` label printed on the board near the port. * xref:os.adoc#update-software[Update to the latest software.] * Try a different power supply. The Camera Module adds about 200-250mA to the power requirements of your Raspberry Pi. If your power supply is low quality, your Raspberry Pi may not be able to power the Camera module. 
* If you've checked all the above issues and your Camera Module still doesn't work like you expect, try posting on our forums for more help.

---

# Source: v4l2.adoc

== V4L2 drivers

V4L2 drivers provide a standard Linux interface for accessing camera and codec features. Normally, Linux loads drivers automatically during boot. But in some situations you may need to xref:camera_software.adoc#configuration[load camera drivers explicitly].

=== Device nodes when using `libcamera`

[cols="1,^3"]
|===
| /dev/videoX | Default action

| `video0` | Unicam driver for the first CSI-2 receiver
| `video1` | Unicam driver for the second CSI-2 receiver
| `video10` | Video decode
| `video11` | Video encode
| `video12` | Simple ISP, can perform conversion and resizing between RGB/YUV formats in addition to Bayer to RGB/YUV conversion
| `video13` | Input to fully programmable ISP
| `video14` | High resolution output from fully programmable ISP
| `video15` | Low-resolution output from fully programmable ISP
| `video16` | Image statistics from fully programmable ISP
| `video19` | HEVC decode
|===

=== Use the V4L2 drivers

For more information on how to use the V4L2 drivers, see the https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/v4l2.html[V4L2 documentation].

---

# Source: webcams.adoc

== Use a USB webcam

Most Raspberry Pi devices have dedicated ports for camera modules. Camera modules are high-quality, highly-configurable cameras popular with Raspberry Pi users.

However, for many purposes a USB webcam has everything you need to record pictures and videos from your Raspberry Pi. This section explains how to use a USB webcam with your Raspberry Pi.
=== Install dependencies

First, install the `fswebcam` package:

[source,console]
----
$ sudo apt install fswebcam
----

Next, add your username to the `video` group, otherwise you may see 'permission denied' errors:

[source,console]
----
$ sudo usermod -a -G video <username>
----

To check that the user has been added to the group correctly, use the `groups` command.

=== Take a photo

Run the following command to take a picture using the webcam and save it to a file named `image.jpg`:

[source,console]
----
$ fswebcam image.jpg
----

You should see output similar to the following:

----
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
Adjusting resolution from 384x288 to 352x288.
--- Capturing frame...
Corrupt JPEG data: 2 extraneous bytes before marker 0xd4
Captured frame in 0.00 seconds.
--- Processing captured image...
Writing JPEG image to 'image.jpg'.
----

.By default, `fswebcam` uses a low resolution and adds a banner displaying a timestamp.
image::images/webcam-image.jpg[By default, `fswebcam` uses a low resolution and adds a banner displaying a timestamp]

To specify a different resolution for the captured image, use the `-r` flag, passing a width and height as two numbers separated by an `x`:

[source,console]
----
$ fswebcam -r 1280x720 image2.jpg
----

You should see output similar to the following:

----
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
--- Capturing frame...
Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
Captured frame in 0.00 seconds.
--- Processing captured image...
Writing JPEG image to 'image2.jpg'.
----

.Specify a resolution to capture a higher quality image.
image::images/webcam-image-high-resolution.jpg[Specify a resolution to capture a higher quality image] ==== Remove the banner To remove the banner from the captured image, use the `--no-banner` flag: [source,console] ---- $ fswebcam --no-banner image3.jpg ---- You should see output similar to the following: ---- --- Opening /dev/video0... Trying source module v4l2... /dev/video0 opened. No input was specified, using the first. --- Capturing frame... Corrupt JPEG data: 2 extraneous bytes before marker 0xd6 Captured frame in 0.00 seconds. --- Processing captured image... Disabling banner. Writing JPEG image to 'image3.jpg'. ---- .Specify `--no-banner` to save the image without the timestamp banner. image::images/webcam-image-no-banner.jpg[Specify `--no-banner` to save the image without the timestamp banner] === Automate image capture Unlike xref:camera_software.adoc#rpicam-apps[`rpicam-apps`], `fswebcam` doesn't have any built-in functionality to substitute timestamps and numbers in output image names. This can be useful when capturing multiple images, since manually editing the file name every time you record an image can be tedious. Instead, use a Bash script to implement this functionality yourself. Create a new file named `webcam.sh` in your home folder. Add the following example code, which uses the `bash` programming language to save images to files with a file name containing the year, month, day, hour, minute, and second: [,bash] ---- #!/bin/bash DATE=$(date +"%Y-%m-%d_%H-%M-%S") fswebcam -r 1280x720 --no-banner $DATE.jpg ---- Then, make the bash script executable by running the following command: [source,console] ---- $ chmod +x webcam.sh ---- Run the script with the following command to capture an image and save it to a file with a timestamp for a name, similar to `2024-05-10_12-06-33.jpg`: [source,console] ---- $ ./webcam.sh ---- You should see output similar to the following: ---- --- Opening /dev/video0... Trying source module v4l2... /dev/video0 opened. 
No input was specified, using the first.
--- Capturing frame...
Corrupt JPEG data: 2 extraneous bytes before marker 0xd6
Captured frame in 0.00 seconds.
--- Processing captured image...
Disabling banner.
Writing JPEG image to '2024-05-10_12-06-33.jpg'.
----

=== Capture a time lapse

Use `cron` to schedule photo capture at a given interval. With the right interval, such as once a minute, you can capture a time lapse.

First, open the cron table for editing:

[source,console]
----
$ crontab -e
----

Once you have the file open in an editor, add the following line to the schedule to take a picture every minute, replacing `<username>` with your username:

[,bash]
----
* * * * * /home/<username>/webcam.sh 2>&1
----

Save and exit, and you should see the following message:

----
crontab: installing new crontab
----

---

# Source: camera_software.adoc

include::camera/camera_usage.adoc[]
include::camera/rpicam_apps_intro.adoc[]
include::camera/rpicam_hello.adoc[]
include::camera/rpicam_jpeg.adoc[]
include::camera/rpicam_still.adoc[]
include::camera/rpicam_vid.adoc[]
include::camera/rpicam_raw.adoc[]
include::camera/rpicam_detect.adoc[]
include::camera/rpicam_configuration.adoc[]
include::camera/rpicam_apps_multicam.adoc[]
include::camera/rpicam_apps_packages.adoc[]
include::camera/streaming.adoc[]
include::camera/rpicam_options_common.adoc[]
include::camera/rpicam_options_still.adoc[]
include::camera/rpicam_options_vid.adoc[]
include::camera/rpicam_options_libav.adoc[]
include::camera/rpicam_options_detect.adoc[]
include::camera/rpicam_apps_post_processing.adoc[]
include::camera/rpicam_apps_post_processing_opencv.adoc[]
include::camera/rpicam_apps_post_processing_tflite.adoc[]
include::camera/rpicam_apps_post_processing_writing.adoc[]
include::camera/rpicam_apps_building.adoc[]
include::camera/rpicam_apps_writing.adoc[]
include::camera/qt.adoc[]
include::camera/libcamera_python.adoc[]
include::camera/webcams.adoc[]
include::camera/v4l2.adoc[] include::camera/csi-2-usage.adoc[] include::camera/libcamera_differences.adoc[] include::camera/troubleshooting.adoc[] include::camera/rpicam_apps_getting_help.adoc[] --- # Source: cm-bootloader.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Compute Module EEPROM bootloader Since Compute Module 4, Compute Modules use an EEPROM bootloader. This bootloader lives in a small segment of on-board storage instead of the boot partition. As a result, it requires different procedures to update. Before using a Compute Module with an EEPROM bootloader in production, always follow these best practices: * Select a specific bootloader release. Verify that every Compute Module you use has that release. The version in the `usbboot` repo is always a recent stable release. * Configure the boot device by xref:raspberry-pi.adoc#raspberry-pi-bootloader-configuration[setting the `BOOT_ORDER` ]. * Enable hardware write-protection on the bootloader EEPROM to ensure that the bootloader can't be modified on inaccessible products (such as remote or embedded devices). === Flash Compute Module bootloader EEPROM To flash the bootloader EEPROM: . Set up the hardware as you would when xref:../computers/compute-module.adoc#flash-compute-module-emmc[flashing the eMMC], but ensure `EEPROM_nWP` is _not_ pulled low. . Run the following command to write `recovery/pieeprom.bin` to the bootloader EEPROM: + [source,console] ---- $ ./rpiboot -d recovery ---- . Once complete, `EEPROM_nWP` may be pulled low again. === Flash storage devices other than SD cards The Linux-based https://github.com/raspberrypi/usbboot/blob/master/mass-storage-gadget/README.md[`mass-storage-gadget`] supports flashing of NVMe, eMMC and USB block devices. `mass-storage-gadget` writes devices faster than the firmware-based `rpiboot` mechanism, and also provides a UART console to the device for debugging. 
`usbboot` also includes a number of https://github.com/raspberrypi/usbboot/blob/master/Readme.md#compute-module-4-extensions[extensions] that enable you to interact with the EEPROM bootloader on a Compute Module.

=== Update the Compute Module bootloader

On Compute Modules with an EEPROM bootloader, the ROM never runs `recovery.bin` from SD/eMMC. These Compute Modules disable the `rpi-eeprom-update` service by default, because eMMC is not removable and an invalid `recovery.bin` file could prevent the system from booting.

You can override this behaviour with `self-update` mode. In `self-update` mode, you can update the bootloader from USB MSD or network boot.

WARNING: `self-update` mode does not update the bootloader atomically. If a power failure occurs during an EEPROM update, you could corrupt the EEPROM.

=== Modify the bootloader configuration

To modify the Compute Module EEPROM bootloader configuration:

. Navigate to the `usbboot/recovery` directory.
. If you require a specific bootloader release, replace `pieeprom.original.bin` with the equivalent from your bootloader release.
. Edit the default `boot.conf` bootloader configuration file to define a xref:../computers/raspberry-pi.adoc#BOOT_ORDER[`BOOT_ORDER`]:
* For network boot, use `BOOT_ORDER=0xf2`.
* For SD/eMMC boot, use `BOOT_ORDER=0xf1`.
* For USB boot failing over to eMMC, use `BOOT_ORDER=0xf15`.
* For NVMe boot, use `BOOT_ORDER=0xf6`.
. Run `./update-pieeprom.sh` to generate a new `pieeprom.bin` EEPROM image file.
. If you require EEPROM write-protection, add `eeprom_write_protect=1` to `/boot/firmware/config.txt`.
* Once enabled in software, you can lock hardware write-protection by pulling the `EEPROM_nWP` pin low.
. Run the following command to write the updated `pieeprom.bin` image to EEPROM:
+
[source,console]
----
$ ../rpiboot -d .
---- --- # Source: cm-emmc-flashing.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[flash-compute-module-emmc]] == Flash an image to a Compute Module TIP: To flash the same image to multiple Compute Modules, use the https://github.com/raspberrypi/rpi-sb-provisioner[Raspberry Pi Secure Boot Provisioner]. To customise an OS image to flash onto those devices, use https://github.com/RPi-Distro/pi-gen[pi-gen]. [[flashing-the-compute-module-emmc]] The Compute Module has an on-board eMMC device connected to the primary SD card interface. This guide explains how to flash (write) an operating system image to the eMMC storage of a single Compute Module. **Lite** variants of Compute Modules do not have on-board eMMC. Instead, follow the procedure to flash a storage device for other Raspberry Pi devices at xref:../computers/getting-started.adoc#installing-the-operating-system[Install an operating system]. === Prerequisites To flash the Compute Module eMMC, you need the following: * Another computer, referred to in this guide as the *host device*. You can use Linux (we recommend Raspberry Pi OS or Ubuntu), Windows 11, or macOS. * The Compute Module IO Board xref:compute-module.adoc#io-board-compatibility[that corresponds to your Compute Module model]. * A micro USB cable, or a USB-C cable for Compute Module models since CM5IO. TIP: In some cases, USB hubs can prevent the host device from recognising the Compute Module. If your host device does not recognise the Compute Module, try connecting the Compute Module directly to the host device. For more diagnostic tips, see https://github.com/raspberrypi/usbboot?tab=readme-ov-file#troubleshooting[the usbboot troubleshooting guide]. === Set up the IO Board To begin, physically set up your IO Board. This includes connecting the Compute Module and host device to the IO Board. [tabs] ====== Compute Module 5 IO Board:: + To set up the Compute Module 5 IO Board: + . Connect the Compute Module to the IO board. 
When connected, the Compute Module should lie flat. . Fit `nRPI_BOOT` to J2 (`disable eMMC Boot`) on the IO board jumper. . Connect a cable from USB-C slave port J11 on the IO board to the host device. Compute Module 4 IO Board:: + To set up the Compute Module 4 IO Board: + . Connect the Compute Module to the IO board. When connected, the Compute Module should lie flat. . Fit `nRPI_BOOT` to J2 (`disable eMMC Boot`) on the IO board jumper. . Connect a cable from micro USB slave port J11 on the IO board to the host device. Compute Module IO Board:: + To set up the Compute Module IO Board: + . Connect the Compute Module to the IO board. When connected, the Compute Module should lie parallel to the board, with the engagement clips firmly clicked into place. . Set J4 (`USB SLAVE BOOT ENABLE`) to 1-2 = (`USB BOOT ENABLED`) . Connect a cable from micro USB slave port J15 on the IO board to the host device. ====== === Set up the host device Next, let's set up software on the host device. TIP: For a host device, we recommend a Raspberry Pi 4 or newer running 64-bit Raspberry Pi OS. [tabs] ====== Linux:: + To set up software on a Linux host device: + . Run the following command to install `rpiboot` (or, alternatively, https://github.com/raspberrypi/usbboot[build `rpiboot` from source]): + [source,console] ---- $ sudo apt install rpiboot ---- . Connect the IO Board to power. . Then, run `rpiboot`: + [source,console] ---- $ sudo rpiboot ---- . After a few seconds, the Compute Module should appear as a mass storage device. Check the `/dev/` directory, likely `/dev/sda` or `/dev/sdb`, for the device. Alternatively, run `lsblk` and search for a device with a storage capacity that matches the capacity of your Compute Module. macOS:: + To set up software on a macOS host device: + . First, https://github.com/raspberrypi/usbboot?tab=readme-ov-file#macos[build `rpiboot` from source]. . Connect the IO Board to power. . 
Then, run the `rpiboot` executable with the following command:
+
[source,console]
----
$ rpiboot -d mass-storage-gadget64
----
. When the command finishes running, you should see a message stating "The disk you inserted was not readable by this computer." Click **Ignore**. Your Compute Module should now appear as a mass storage device.

Windows::
+
To set up software on a Windows 11 host device:
+
. Download the https://github.com/raspberrypi/usbboot/raw/master/win32/rpiboot_setup.exe[Windows installer] or https://github.com/raspberrypi/usbboot[build `rpiboot` from source].
. Double-click on the installer to run it. This installs the drivers and boot tool. Do not close any driver installation windows which appear during the installation process.
. Reboot.
. Connect the IO Board to power. Windows should discover the hardware and configure the required drivers.
. On CM4 and later devices, select **Raspberry Pi - Mass Storage Gadget - 64-bit** from the start menu. After a few seconds, the Compute Module eMMC or NVMe will appear as USB mass storage devices. This also provides a debug console as a serial port gadget.
. On CM3 and older devices, select **rpiboot**. Double-click on `RPiBoot.exe` to run it. After a few seconds, the Compute Module eMMC should appear as a USB mass storage device.
======

=== Flash the eMMC

You can use xref:../computers/getting-started.adoc#raspberry-pi-imager[Raspberry Pi Imager] to flash an operating system image to a Compute Module.

Alternatively, use `dd` to write a raw OS image (such as xref:../computers/os.adoc#introduction[Raspberry Pi OS]) to your Compute Module. Run the following command, replacing `/dev/sdX` with the path to the mass storage device representation of your Compute Module and `raw_os_image.img` with the path to your raw OS image:

[source,console]
----
$ sudo dd if=raw_os_image.img of=/dev/sdX bs=4MiB
----

Once the image has been written, disconnect and reconnect the Compute Module.
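You can optionally verify the write by reading the flashed device back and comparing checksums against the source image. A minimal sketch, assuming GNU coreutils (`sha256sum`, `head -c`); `verify_image` is our own illustrative helper, not part of any tool, and `/dev/sdX` again stands for your device:

```shell
#!/bin/sh
# Compare the first N bytes of the flashed device against the source
# image, where N is the image size (the device is usually larger than
# the image, so the trailing bytes are not meaningful). verify_image
# is a hypothetical helper written for this guide.
verify_image() {
    image=$1
    device=$2
    size=$(wc -c < "$image")
    want=$(sha256sum "$image" | cut -d' ' -f1)
    got=$(head -c "$size" "$device" | sha256sum | cut -d' ' -f1)
    if [ "$want" = "$got" ]; then
        echo "OK: device matches image"
    else
        echo "MISMATCH: re-flash the device"
    fi
}

# Usage (as root): verify_image raw_os_image.img /dev/sdX
```

Reading the device back requires the same privileges as writing it, so run the check with `sudo`.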
You should now see two partitions (for Raspberry Pi OS):

[source,console]
----
/dev/sdX    <- Device
/dev/sdX1   <- First partition (FAT)
/dev/sdX2   <- Second partition (Linux filesystem)
----

You can mount the `/dev/sdX1` and `/dev/sdX2` partitions normally.

=== Boot from eMMC

[tabs]
======
Compute Module 5 IO Board::
+
Disconnect `nRPI_BOOT` from J2 (`disable eMMC Boot`) on the IO board jumper.

Compute Module 4 IO Board::
+
Disconnect `nRPI_BOOT` from J2 (`disable eMMC Boot`) on the IO board jumper.

Compute Module IO Board::
+
Set J4 (`USB SLAVE BOOT ENABLE`) to 2-3 (`USB BOOT DISABLED`).
======

==== Boot

Disconnect the USB slave port. Power-cycle the IO board to boot the Compute Module from the new image you just wrote to eMMC.

=== Known issues

* A small percentage of CM3 devices may experience problems booting. We have traced these back to the method used to create the FAT32 partition; we believe the problem is due to a difference in timing between the CPU and eMMC. If you have trouble booting your CM3, create the partitions manually with the following commands:
+
[source,console]
----
$ sudo parted /dev/
(parted) mkpart primary fat32 4MiB 64MiB
(parted) q
$ sudo mkfs.vfat -F32 /dev/
$ sudo cp -r /*
----
* The CM1 bootloader returns a slightly incorrect USB packet to the host. Most USB hosts ignore it, but some USB ports don't work due to this bug. CM3 fixed this bug.

---

# Source: cm-peri-sw-guide.adoc

== Wire peripherals

This guide helps developers wire up peripherals to the Compute Module pins, and explains how to enable these peripherals in software. Most of the pins of the SoC, including the GPIO, two CSI camera interfaces, two DSI display interfaces, and HDMI are available for wiring. You can usually leave unused pins disconnected.

Compute Modules that come in the DDR2 SODIMM form factor are physically compatible with any DDR2 SODIMM socket.
However, the pinout is **not** the same as SODIMM memory modules. To use a Compute Module, a user must design a motherboard that: * provides power to the Compute Module (3.3V and 1.8V at minimum) * connects the pins to the required peripherals for the user's application This guide first explains the boot process and how Device Tree describes attached hardware. Then, we'll explain how to attach an I2C and an SPI peripheral to an IO Board. Finally, we'll create the Device Tree files necessary to use both peripherals with Raspberry Pi OS. === BCM283x GPIOs BCM283x has three banks of general-purpose input/output (GPIO) pins: 28 pins on Bank 0, 18 pins on Bank 1, and 8 pins on Bank 2, for a total of 54 pins. These pins can be used as true GPIO pins: software can set them as inputs or outputs, read and/or set state, and use them as interrupts. They also can run alternate functions such as I2C, SPI, I2S, UART, SD card, and others. You can use Bank 0 or Bank 1 on any Compute Module. Don't use Bank 2: it controls eMMC, HDMI hot plug detect, and ACT LED/USB boot control. Use `pinctrl` to check the voltage and function of the GPIO pins to see if your Device Tree is working as expected. === BCM283x boot process BCM283x devices have a VideoCore GPU and Arm CPU cores. The GPU consists of a DSP processor and hardware accelerators for imaging, video encode and decode, 3D graphics, and image compositing. In BCM283x devices, the DSP core in the GPU boots first. It handles setup before booting up the main Arm processors. Raspberry Pi BCM283x devices have a three-stage boot process: * The GPU DSP comes out of reset and executes code from the small internal boot ROM. This code loads a second-stage bootloader via an external interface. This code first looks for a second-stage boot loader on the boot device called `bootcode.bin` on the boot partition. 
If no boot device is found or `bootcode.bin` is not found, the boot ROM waits in USB boot mode for a host to provide a second-stage boot loader (`usbbootcode.bin`). * The second-stage boot loader is responsible for setting up the LPDDR2 SDRAM interface and other critical system functions. Once set up, the second-stage boot loader loads and executes the main GPU firmware (`start.elf`). * `start.elf` handles additional system setup and boots up the Arm processor subsystem. It contains the GPU firmware. The GPU firmware first reads `dt-blob.bin` to determine initial GPIO pin states and GPU-specific interfaces and clocks, then parses `config.txt`. It then loads a model-specific Arm device tree file and any Device Tree overlays specified in `config.txt` before starting the Arm subsystem and passing the Device Tree data to the booting Linux kernel. === Device Tree xref:configuration.adoc#device-trees-overlays-and-parameters[Linux Device Tree for Raspberry Pi] encodes information about hardware attached to a system as well as the drivers used to communicate with that hardware. The boot partition contains several binary Device Tree (`.dtb`) files. The Device Tree compiler creates these binary files using human-readable Device Tree descriptions (`.dts`). The boot partition contains two different types of Device Tree files. One is used by the GPU only; the rest are standard Arm Device Tree files for each of the BCM283x-based Raspberry Pi products: * `dt-blob.bin` (used by the GPU) * `bcm2708-rpi-b.dtb` (Used for Raspberry Pi 1 Models A and B) * `bcm2708-rpi-b-plus.dtb` (Used for Raspberry Pi 1 Models B+ and A+) * `bcm2709-rpi-2-b.dtb` (Used for Raspberry Pi 2 Model B) * `bcm2710-rpi-3-b.dtb` (Used for Raspberry Pi 3 Model B) * `bcm2708-rpi-cm.dtb` (Used for Raspberry Pi Compute Module 1) * `bcm2710-rpi-cm3.dtb` (Used for Raspberry Pi Compute Module 3) During boot, the user can specify a specific Arm Device Tree to use via the `device_tree` parameter in `config.txt`. 
For example, the line `device_tree=mydt.dtb` in `config.txt` specifies an Arm Device Tree in a file named `mydt.dtb`. You can create a full Device Tree for a Compute Module product, but we recommend using **overlays** instead. Overlays add descriptions of non-board-specific hardware to the base Device Tree. This includes GPIO pins used and their function, as well as the devices attached, so that the correct drivers can be loaded. The bootloader merges overlays with the base Device Tree before passing the Device Tree to the Linux kernel. Occasionally the base Device Tree changes, usually in a way that will not break overlays. Use the `dtoverlay` parameter in `config.txt` to load Device Tree overlays. Raspberry Pi OS assumes that all overlays are located in the `/overlays` directory and use the suffix `-overlay.dtb`. For example, the line `dtoverlay=myoverlay` loads the overlay `/overlays/myoverlay-overlay.dtb`. To wire peripherals to a Compute Module, describe all hardware attached to the Bank 0 and Bank 1 GPIOs in an overlay. This allows you to use standard Raspberry Pi OS images, since the overlay is merged into the standard base Device Tree. Alternatively, you can define a custom Device Tree for your application, but you won't be able to use standard Raspberry Pi OS images. Instead, you must create a modified Raspberry Pi OS image that includes your custom device tree for every OS update you wish to distribute. If the base overlay changes, you might need to update your customised Device Tree. === `dt-blob.bin` When `start.elf` runs, it first reads `dt-blob.bin`. This is a special form of Device Tree blob which tells the GPU how to set up the GPIO pin states. `dt-blob.bin` contains information about GPIOs and peripherals controlled by the GPU, instead of the SoC. For example, the GPU manages Camera Modules. The GPU needs exclusive access to an I2C interface and a couple of pins to talk to a Camera Module. 
On most Raspberry Pi models, I2C0 is reserved for exclusive GPU use. `dt-blob.bin` defines the GPIO pins used for I2C0.

By default, `dt-blob.bin` does not exist. Instead, `start.elf` includes a built-in version of the file. Many Compute Module projects provide a custom `dt-blob.bin` which overrides the default built-in file.

`dt-blob.bin` specifies:

* the pin used for HDMI hot plug detect
* GPIO pins used as a GPCLK output
* an ACT LED that the GPU can use while booting

https://datasheets.raspberrypi.com/cm/minimal-cm-dt-blob.dts[`minimal-cm-dt-blob.dts`] is an example `.dts` device tree file. It sets up HDMI hot plug detection, an ACT LED, and sets all other GPIOs as inputs with default pulls.

To compile `minimal-cm-dt-blob.dts` to `dt-blob.bin`, use the xref:configuration.adoc#device-trees-overlays-and-parameters[Device Tree compiler] `dtc`. To install `dtc` on a Raspberry Pi, run the following command:

[source,console]
----
$ sudo apt install device-tree-compiler
----

Then, run the following command to compile `minimal-cm-dt-blob.dts` into `dt-blob.bin`:

[source,console]
----
$ dtc -I dts -O dtb -o dt-blob.bin minimal-cm-dt-blob.dts
----

For more information, see our xref:configuration.adoc#change-the-default-pin-configuration[guide to creating `dt-blob.bin`].

=== Arm Linux Device Tree

After `start.elf` reads `dt-blob.bin` and sets up the initial pin states and clocks, it reads xref:config_txt.adoc[`config.txt`], which contains many other options for system setup. After reading `config.txt`, `start.elf` reads a model-specific Device Tree file. For instance, Compute Module 3 uses `bcm2710-rpi-cm3.dtb`. This file is a standard Arm Linux Device Tree file that details hardware attached to the processor. It enumerates:

* what and where peripheral devices exist
* which GPIOs are used
* what functions those GPIOs have
* what physical devices are connected

This file sets up the GPIOs by overwriting the pin state in `dt-blob.bin` if it is different.
It will also try to load drivers for the specific devices.

The model-specific Device Tree file contains disabled entries for peripherals. It contains no GPIO pin definitions other than for the eMMC/SD card peripheral, which always uses the same pins.

=== Device Tree source and compilation

The Raspberry Pi OS image provides compiled `dtb` files, but the source `dts` files live in the https://github.com/raspberrypi/linux/tree/rpi-6.6.y/arch/arm/boot/dts/broadcom[Raspberry Pi Linux kernel branch]. Look for `rpi` in the file names.

Default overlay `dts` files live at https://github.com/raspberrypi/linux/tree/rpi-6.6.y/arch/arm/boot/dts/overlays[`arch/arm/boot/dts/overlays`]. These overlay files are a good starting point for creating your own overlays. To compile these `dts` files to `dtb` files, use the xref:configuration.adoc#device-trees-overlays-and-parameters[Device Tree compiler] `dtc`. When building your own kernel, the build host requires the Device Tree compiler in `scripts/dtc`. To build your overlays automatically, add them to the `dtbs` make target in `arch/arm/boot/dts/overlays/Makefile`.

=== Device Tree debugging

When booting the Linux kernel, the GPU provides a fully assembled Device Tree created using the base `dts` and any overlays. This full tree is available via the Linux `proc` interface in `/proc/device-tree`. Nodes become directories and properties become files.

You can use `dtc` to write this out as a human-readable `dts` file for debugging. To see the fully assembled device tree, run the following command:

[source,console]
----
$ dtc -I fs -O dts -o proc-dt.dts /proc/device-tree
----

`pinctrl` provides the status of the GPIO pins. If something seems to be going awry, try dumping the GPU log messages:

[source,console]
----
$ sudo vclog --msg
----

TIP: To include even more diagnostics in the output, add `dtdebug=1` to `config.txt`.
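When an overlay appears to have no effect, a quick first check is whether `config.txt` actually requests it with an uncommented `dtoverlay=` line. A sketch; `overlay_requested` is a hypothetical helper written for this guide, not a Raspberry Pi OS tool:

```shell
#!/bin/sh
# Check whether a config file contains an uncommented dtoverlay= line
# for the named overlay, allowing trailing parameters such as
# dtoverlay=enc28j60,int_pin=25. overlay_requested is an illustrative
# helper; the default path is the modern config.txt location.
overlay_requested() {
    overlay=$1
    config=${2:-/boot/firmware/config.txt}
    if grep -Eq "^[[:space:]]*dtoverlay=$overlay(,|\$)" "$config"; then
        echo "$overlay: requested in $config"
        return 0
    fi
    echo "$overlay: not requested in $config"
    return 1
}
```

If the overlay is requested but still missing from `/proc/device-tree`, the `dtdebug=1` log output mentioned above usually shows why the merge failed.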
Use the https://forums.raspberrypi.com/viewforum.php?f=107[Device Tree Raspberry Pi forum] to ask Device Tree-related questions or report an issue.

=== Examples

The following examples use an IO Board with peripherals attached via jumper wires. We assume a CM1+CMIO or CM3+CMIO3, running a clean install of Raspberry Pi OS Lite. The examples here require internet connectivity, so we recommend a USB hub, keyboard, and wireless LAN or Ethernet dongle plugged into the IO Board USB port.

==== Attach an I2C RTC to Bank 1 pins

In this example, we wire an NXP PCF8523 real time clock (RTC) to the IO Board Bank 1 GPIO pins: 3V3, GND, I2C1_SDA on GPIO44 and I2C1_SCL on GPIO45.

Download https://datasheets.raspberrypi.com/cm/minimal-cm-dt-blob.dts[`minimal-cm-dt-blob.dts`] and copy it to the boot partition in `/boot/firmware/`. Edit `minimal-cm-dt-blob.dts` and change the pin states of GPIO44 and 45 to be I2C1 with pull-ups:

[source,console]
----
$ sudo nano /boot/firmware/minimal-cm-dt-blob.dts
----

Replace the following lines:

[source,dts]
----
pin@p44 { function = "input"; termination = "pull_down"; }; // DEFAULT STATE WAS INPUT NO PULL
pin@p45 { function = "input"; termination = "pull_down"; }; // DEFAULT STATE WAS INPUT NO PULL
----

With the following pull-up definitions:

[source,dts]
----
pin@p44 { function = "i2c1"; termination = "pull_up"; }; // SDA1
pin@p45 { function = "i2c1"; termination = "pull_up"; }; // SCL1
----

We could use this `dt-blob.dts` with no changes, because the Linux Device Tree re-configures these pins during Linux kernel boot when the specific drivers load. However, if you configure `dt-blob.dts`, the GPIOs reach their final state as soon as possible during the GPU boot stage. In some cases, pins must be configured at GPU boot time so they are in a specific state when Linux drivers are loaded. For example, a reset line may need to be held in the correct state.
Run the following command to compile `dt-blob.bin`: [source,console] ---- $ sudo dtc -I dts -O dtb -o /boot/firmware/dt-blob.bin /boot/firmware/minimal-cm-dt-blob.dts ---- Download https://datasheets.raspberrypi.com/cm/example1-overlay.dts[`example1-overlay.dts`], copy it to the boot partition in `/boot/firmware/`, then compile it with the following command: [source,console] ---- $ sudo dtc -@ -I dts -O dtb -o /boot/firmware/overlays/example1.dtbo /boot/firmware/example1-overlay.dts ---- The `-@` flag compiles `dts` files with external references. It is usually necessary. Add the following line to xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]: [source,ini] ---- dtoverlay=example1 ---- Finally, reboot with `sudo reboot`. Once rebooted, you should see an `rtc0` entry in `/dev`. Run the following command to view the hardware clock time: [source,console] ---- $ sudo hwclock ---- ==== Attach an ENC28J60 SPI Ethernet controller on Bank 0 In this example, we use an overlay already defined in `/boot/firmware/overlays` to add an ENC28J60 SPI Ethernet controller to Bank 0. The Ethernet controller uses SPI pins CE0, MISO, MOSI and SCLK (GPIO8-11 respectively), GPIO25 for a falling edge interrupt, in addition to GND and 3.3V. In this example, we won't change `dt-blob.bin`. Instead, add the following line to `/boot/firmware/config.txt`: [source,ini] ---- dtoverlay=enc28j60 ---- Reboot with `sudo reboot`. If you now run `ifconfig` you should see an additional `eth` entry for the ENC28J60 NIC. You should also have Ethernet connectivity. 
Run the following command to test your connectivity:

[source,console]
----
$ ping 8.8.8.8
----

Run the following command to show GPIO functions; GPIO8-11 should now provide ALT0 (SPI) functions:

[source,console]
----
$ pinctrl
----

---

# Source: cmio-camera.adoc

== Attach a Camera Module

The Compute Module has two CSI-2 camera interfaces: CAM1 and CAM0. This section explains how to connect one or two Raspberry Pi Cameras to a Compute Module using the CAM1 and CAM0 interfaces with a Compute Module I/O Board.

=== Update your system

Before configuring a camera, xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[ensure that your Raspberry Pi firmware is up-to-date]:

[source,console]
----
$ sudo apt update
$ sudo apt full-upgrade
----

=== Connect one camera

To connect a single camera to a Compute Module, complete the following steps:

. Disconnect the Compute Module from power.
. Connect the Camera Module to the CAM1 port using an RPI-CAMERA board or a Raspberry Pi Zero camera cable.
+
image::images/CMIO-Cam-Adapter.jpg[alt="Connecting the adapter board", width="60%"]
. _(CM5 only)_: Fit two jumpers on J6 per the board's written instructions.
+
image::images/cm5io-j6-cam1.png[alt="Jumpers on J6 for CAM1", width="60%"]
. _(CM1, CM3, CM3+, and CM4S only)_: Connect the following GPIO pins with jumper cables:
* `0` to `CD1_SDA`
* `1` to `CD1_SCL`
* `2` to `CAM1_I01`
* `3` to `CAM1_I00`
+
image::images/CMIO-Cam-GPIO.jpg[alt="GPIO connection for a single camera", width="60%"]
. Reconnect the Compute Module to power.
. Remove (or comment out with the prefix `#`) the following lines, if they exist, in `/boot/firmware/config.txt`:
+
[source,ini]
----
camera_auto_detect=1
----
+
[source,ini]
----
dtparam=i2c_arm=on
----
.
_(CM1, CM3, CM3+, and CM4S only)_: Add the following directive to `/boot/firmware/config.txt` to accommodate the swapped GPIO pin assignment on the I/O board: + [source,ini] ---- dtoverlay=cm-swap-i2c0 ---- . _(CM1, CM3, CM3+, and CM4S only)_: Add the following directive to `/boot/firmware/config.txt` to assign GPIO 3 as the CAM1 regulator: + [source,ini] ---- dtparam=cam1_reg ---- . Add the appropriate directive to `/boot/firmware/config.txt` to manually configure the driver for your camera model: + [%header,cols="1,1"] |=== | camera model | directive | v1 camera | `dtoverlay=ov5647` | v2 camera | `dtoverlay=imx219` | v3 camera | `dtoverlay=imx708` | HQ camera | `dtoverlay=imx477` | GS camera | `dtoverlay=imx296` |=== . Reboot your Compute Module with `sudo reboot`. . Run the following command to check the list of detected cameras: + [source,console] ---- $ rpicam-hello --list ---- You should see your camera model, referred to by the driver directive in the table above, in the output. === Connect two cameras To connect two cameras to a Compute Module, complete the following steps: . Follow the single camera instructions above. . Disconnect the Compute Module from power. . Connect the Camera Module to the CAM0 port using a RPI-CAMERA board or a Raspberry Pi Zero camera cable. + image::images/CMIO-Cam-Adapter.jpg[alt="Connect the adapter board", width="60%"] . _(CM1, CM3, CM3+, and CM4S only)_: Connect the following GPIO pins with jumper cables: * `28` to `CD0_SDA` * `29` to `CD0_SCL` * `30` to `CAM0_I01` * `31` to `CAM0_I00` + image:images/CMIO-Cam-GPIO2.jpg[alt="GPIO connection with additional camera", width="60%"] . _(CM4)_: Connect the J6 GPIO pins with two vertical-orientation jumpers. + image:images/j6_vertical.jpg[alt="Connect the J6 GPIO pins in vertical orientation", width="60%"] . Reconnect the Compute Module to power. . 
_(CM1, CM3, CM3+, and CM4S only)_: Add the following directive to `/boot/firmware/config.txt` to assign GPIO 31 as the CAM0 regulator: + [source,ini] ---- dtparam=cam0_reg ---- . Add the appropriate directive to `/boot/firmware/config.txt` to manually configure the driver for your camera model: + [%header,cols="1,1"] |=== | camera model | directive | v1 camera | `dtoverlay=ov5647,cam0` | v2 camera | `dtoverlay=imx219,cam0` | v3 camera | `dtoverlay=imx708,cam0` | HQ camera | `dtoverlay=imx477,cam0` | GS camera | `dtoverlay=imx296,cam0` |=== . Reboot your Compute Module with `sudo reboot`. . Run the following command to check the list of detected cameras: + [source,console] ---- $ rpicam-hello --list ---- + You should see both camera models, referred to by the driver directives in the table above, in the output. === Software Raspberry Pi OS includes the `libcamera` library to help you take images with your Raspberry Pi. ==== Take a picture Use the following command to immediately take a picture and save it to a file in PNG encoding using the `MMDDhhmmss` date format as a filename: [source,console] ---- $ rpicam-still --datetime -e png ---- Use the `-t` option to add a delay in milliseconds. Use the `--width` and `--height` options to specify a width and height for the image. ==== Take a video Use the following command to immediately start recording a ten-second long video and save it to a file with the h264 codec named `video.h264`: [source,console] ---- $ rpicam-vid -t 10000 -o video.h264 ---- ==== Specify which camera to use By default, `libcamera` always uses the camera with index `0` in the `--list-cameras` list. 
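When scripting, you can look up a camera's index by its sensor name instead of hard-coding it. A minimal sketch: `camera_index` is an illustrative helper, and `listing` stands in for the output of `rpicam-hello --list-cameras` (whose format is shown in the next example), assuming each camera line looks like `N : sensor [...]`.

```shell
# Print the index of the camera whose sensor name matches, given
# rpicam-hello-style listing text; fail if the sensor isn't found.
camera_index() {
    sensor="$1"
    listing="$2"
    printf '%s\n' "$listing" | awk -v s="$sensor" '
        $2 == ":" && $3 == s { print $1; found = 1 }
        END { exit !found }'
}

# Illustrative sample; on a real system use:
# listing="$(rpicam-hello --list-cameras)"
listing='0 : imx477 [4056x3040] (/base/soc/i2c0mux/i2c@1/imx477@1a)
1 : imx708 [4608x2592] (/base/soc/i2c0mux/i2c@0/imx708@1a)'

camera_index imx708 "$listing"   # prints 1
```

You could then select that camera with, for example, `rpicam-hello --camera "$(camera_index imx708 "$listing")"`.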
To choose which camera to use, get the index value for each camera from the following command: [source,console] ---- $ rpicam-hello --list-cameras Available cameras ----------------- 0 : imx477 [4056x3040] (/base/soc/i2c0mux/i2c@1/imx477@1a) Modes: 'SRGGB10_CSI2P' : 1332x990 [120.05 fps - (696, 528)/2664x1980 crop] 'SRGGB12_CSI2P' : 2028x1080 [50.03 fps - (0, 440)/4056x2160 crop] 2028x1520 [40.01 fps - (0, 0)/4056x3040 crop] 4056x3040 [10.00 fps - (0, 0)/4056x3040 crop] 1 : imx708 [4608x2592] (/base/soc/i2c0mux/i2c@0/imx708@1a) Modes: 'SRGGB10_CSI2P' : 1536x864 [120.13 fps - (768, 432)/3072x1728 crop] 2304x1296 [56.03 fps - (0, 0)/4608x2592 crop] 4608x2592 [14.35 fps - (0, 0)/4608x2592 crop] ---- In the above output: * `imx477` refers to an HQ camera with an index of `0` * `imx708` refers to a v3 camera with an index of `1` To use the HQ camera, pass its index (`0`) to the `--camera` option: [source,console] ---- $ rpicam-hello --camera 0 ---- To use the v3 camera, pass its index (`1`) to the `--camera` option: [source,console] ---- $ rpicam-hello --camera 1 ---- === I2C mapping of GPIO pins By default, the supplied camera drivers assume that CAM1 uses `i2c-10` and CAM0 uses `i2c-0`. Compute Module I/O boards map the following GPIO pins to `i2c-10` and `i2c-0`: [%header,cols="1,1,1"] |=== | I/O Board Model | `i2c-10` pins | `i2c-0` pins | CM4 I/O Board | GPIOs 44,45 | GPIOs 0,1 | CM1, CM3, CM3+, CM4S I/O Board | GPIOs 0,1 | GPIOs 28,29 |=== To connect a camera to the CM1, CM3, CM3+ and CM4S I/O Board, add the following directive to `/boot/firmware/config.txt` to accommodate the swapped pin assignment: [source,ini] ---- dtoverlay=cm-swap-i2c0 ---- Alternative boards may use other pin assignments.
Check the documentation for your board and use the following alternate overrides depending on your layout: [%header,cols="1,1"] |=== | Swap | Override | Use GPIOs 0,1 for i2c0 | `i2c0-gpio0` | Use GPIOs 28,29 for i2c0 (default) | `i2c0-gpio28` | Use GPIOs 44,45 for i2c0 | `i2c0-gpio44` | Use GPIOs 0,1 for i2c10 (default) | `i2c10-gpio0` | Use GPIOs 28,29 for i2c10 | `i2c10-gpio28` | Use GPIOs 44,45 for i2c10 | `i2c10-gpio44` |=== ==== GPIO pins for shutdown For camera shutdown, Device Tree uses the pins assigned by the `cam1_reg` and `cam0_reg` overlays. The CM4 IO board provides a single GPIO pin for both aliases, so both cameras share the same regulator. The CM1, CM3, CM3+, and CM4S I/O boards provide no GPIO pin for `cam1_reg` and `cam0_reg`, so the regulators are disabled on those boards. However, you can enable them with the following directives in `/boot/firmware/config.txt`: * `dtparam=cam1_reg` * `dtparam=cam0_reg` To assign `cam1_reg` and `cam0_reg` to a specific pin on a custom board, use the following directives in `/boot/firmware/config.txt`: * `dtparam=cam1_reg_gpio=` * `dtparam=cam0_reg_gpio=` For example, to use pin 42 as the regulator for CAM1, add the directive `dtparam=cam1_reg_gpio=42` to `/boot/firmware/config.txt`. These directives only work for GPIO pins connected directly to the SoC, not for expander GPIO pins.
Connect the display to the `DISP1/DSI1` port on the Compute Module IO board through the 22-way to 15-way display adapter. . Complete the appropriate jumper connections: - For *CM1*, *CM3*, *CM3+*, and *CM4S*, connect the following GPIO pins with jumper cables: * `0` to `CD1_SDA` * `1` to `CD1_SCL` - For *CM5*, on the Compute Module 5 IO board, add the appropriate jumpers to J6, as indicated on the silkscreen. . Reconnect the Compute Module to power. . Add `dtoverlay=vc4-kms-dsi-7inch` to xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]. . Reboot your Compute Module with `sudo reboot`. Your device should detect the display and begin displaying output to it. === Connect a display to DISP0/DSI0 To connect a display to `DISP0/DSI0` on CM1, CM3, and CM4 IO boards: . Connect the display to the `DISP0/DSI0` port on the Compute Module IO board through the 22-way to 15-way display adapter. . Complete the appropriate jumper connections: - For *CM1*, *CM3*, *CM3+*, and *CM4S*, connect the following GPIO pins with jumper cables: * `28` to `CD0_SDA` * `29` to `CD0_SCL` - For *CM4*, on the Compute Module 4 IO board, add the appropriate jumpers to J6, as indicated on the silkscreen. . Reconnect the Compute Module to power. . Add `dtoverlay=vc4-kms-dsi-7inch,dsi0` to `/boot/firmware/config.txt`. . Reboot your Compute Module with `sudo reboot`. Your device should detect the display and begin displaying output to it. === Disable touchscreen The touchscreen requires no additional configuration. Connect it to your Compute Module; both the touchscreen element and display work when successfully detected.
To disable the touchscreen element, but still use the display, add the following line to `/boot/firmware/config.txt`: [source,ini] ---- disable_touchscreen=1 ---- === Disable display To entirely ignore the display when connected, add the following line to `/boot/firmware/config.txt`: [source,ini] ---- ignore_lcd=1 ---- == Attaching the Touch Display 2 LCD panel Touch Display 2 is an LCD display designed for Raspberry Pi devices (see https://www.raspberrypi.com/products/touch-display-2/). It's available in two sizes: 5 inches or 7 inches (diagonally). For more information about these options, see *Specifications* in xref:../accessories/touch-display-2.adoc[Touch Display 2]. Regardless of the size that you use, Touch Display 2 connects in the same way as the original Touch Display, but the software setup on Compute Modules is slightly different because it uses a different display driver. For connection details, see *Connectors* in xref:../accessories/touch-display-2.adoc[Touch Display 2]. To enable Touch Display 2 on `DISP1/DSI1`, edit the `/boot/firmware/config.txt` file to add the following. You must also add jumpers to J6 as indicated on the silkscreen. - For the *5-inch* display: `dtoverlay=vc4-kms-dsi-ili9881-5inch` - For the *7-inch* display: `dtoverlay=vc4-kms-dsi-ili9881-7inch` To use `DISP0/DSI0`, append `,dsi0` to the overlay name. 
- For the *5-inch* display: `dtoverlay=vc4-kms-dsi-ili9881-5inch,dsi0` - For the *7-inch* display: `dtoverlay=vc4-kms-dsi-ili9881-7inch,dsi0` --- # Source: datasheet.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Specifications === Compute Module 5 datasheet To learn more about Compute Module 5 (CM5) and its corresponding IO Board, see the following documents: * https://datasheets.raspberrypi.com/cm5/cm5-datasheet.pdf[CM5 datasheet] * https://rpltd.co/cm5-design-files[CM5 design files] === Compute Module 5 IO Board datasheet Design data for the Compute Module 5 IO Board (CM5IO) can be found in its datasheet: * https://datasheets.raspberrypi.com/cm5/cm5io-datasheet.pdf[CM5IO datasheet] * https://rpltd.co/cm5io-design-files[CM5IO design files] === Compute Module 4 datasheet To learn more about Compute Module 4 (CM4) and its corresponding IO Board, see the following documents: * https://datasheets.raspberrypi.com/cm4/cm4-datasheet.pdf[CM4 datasheet] [.whitepaper, title="Configure the Compute Module 4", subtitle="", link=https://pip.raspberrypi.com/documents/RP-003470-WP-Configuring-the-Compute-Module-4.pdf] **** The Compute Module 4 is available in a number of different hardware configurations. Some use cases disable certain features that aren't required. This document describes how to disable various hardware and software interfaces. **** === Compute Module 4 IO Board datasheet Design data for the Compute Module 4 IO Board (CM4IO) can be found in its datasheet: * https://datasheets.raspberrypi.com/cm4io/cm4io-datasheet.pdf[CM4IO datasheet] We also provide a KiCad PCB design set for the CM4 IO Board: * https://datasheets.raspberrypi.com/cm4io/CM4IO-KiCAD.zip[CM4IO KiCad files] === Compute Module 4S datasheet Compute Module 4S (CM4S) offers the internals of CM4 in the DDR2-SODIMM form factor of CM1, CM3, and CM3+. 
To learn more about CM4S, see the following documents: * https://datasheets.raspberrypi.com/cm4s/cm4s-datasheet.pdf[CM4S datasheet] === Compute Module 3+ datasheet Compute Module 3+ (CM3+) is a supported product with an end-of-life (EOL) date no earlier than January 2028. To learn more about CM3+ and its corresponding IO Board, see the following documents: * https://datasheets.raspberrypi.com/cm/cm3-plus-datasheet.pdf[CM3+ datasheet] === Compute Module 1 and Compute Module 3 datasheet Raspberry Pi Compute Module 1 (CM1) and Compute Module 3 (CM3) are supported products with an end-of-life (EOL) date no earlier than January 2026. To learn more about CM1 and CM3, see the following documents: * https://datasheets.raspberrypi.com/cm/cm1-and-cm3-datasheet.pdf[CM1 and CM3 datasheet] * https://datasheets.raspberrypi.com/cm/cm1-schematics.pdf[Schematics for CM1] * https://datasheets.raspberrypi.com/cm/cm3-schematics.pdf[Schematics for CM3] [.whitepaper, title="Transition from Compute Module 1 or Compute Module 3 to Compute Module 4", subtitle="", link=https://pip.raspberrypi.com/documents/RP-003469-WP-Transitioning-from-CM3-to-CM4.pdf] **** This white paper helps developers migrate from Compute Module 1 or Compute Module 3 to Compute Module 4. **** === Compute Module IO Board schematics The Compute Module IO Board (CMIO) provides a variety of interfaces for CM1, CM3, CM3+, and CM4S. The Compute Module IO Board comes in two variants: Version 1 and Version 3. Version 1 is only compatible with CM1. Version 3 is compatible with CM1, CM3, CM3+, and CM4S. Compute Module IO Board Version 3 is sometimes written as the shorthand CMIO3. 
To learn more about CMIO1 and CMIO3, see the following documents: * https://datasheets.raspberrypi.com/cmio/cmio-schematics.pdf[Schematics for CMIO] * https://datasheets.raspberrypi.com/cmio/RPi-CMIO-R1P2.zip[Design documents for CMIO Version 1.2 (CMIO/CMIO1)] * https://datasheets.raspberrypi.com/cmio/RPi-CMIO-R3P0.zip[Design documents for CMIO Version 3.0 (CMIO3)] === Compute Module Camera/Display Adapter Board schematics The Compute Module Camera/Display Adapter Board (CMCDA) provides camera and display interfaces for Compute Modules. To learn more about the CMCDA, see the following documents: * https://datasheets.raspberrypi.com/cmcda/cmcda-schematics.pdf[Schematics for the CMCDA] * https://datasheets.raspberrypi.com/cmcda/RPi-CMCDA-1P1.zip[Design documents for CMCDA Version 1.1] === Under-voltage detection The following schematic describes an under-voltage detection circuit, as used in older models of Raspberry Pi: image::images/under_voltage_detect.png[Under-voltage detect] --- # Source: introduction.adoc *Note: This file could not be automatically converted from AsciiDoc.* A Raspberry Pi *Compute Module (CM)* is a compact version of a standard Raspberry Pi single-board computer (SBC) designed primarily for embedded and industrial applications. A Compute Module contains the core components of a Raspberry Pi but without the standard connectors like HDMI, USB, or Ethernet. A Raspberry Pi *Compute Module IO Board (CMIO)* provides the physical connectors, peripheral interfaces, and expansion options necessary for accessing and expanding a Compute Module's functionality. A Compute Module IO Board can be used as a standalone product, allowing for rapid prototyping and embedded systems development, or as a reference design for your own carrier (IO) board. In either case, you can selectively make use of only the connectors that your application requires. 
This page: * Summarises the available Raspberry Pi Compute Module and IO Board models, including information about their compatibility and key features. * Describes the accessories available for Compute Module 5 (CM5) and its IO Board (CM5IO). * Explains how to flash and boot Raspberry Pi Compute Modules. * Explains how to configure the EEPROM bootloader of a Compute Module. * Explains how to wire and enable peripherals like cameras and displays using Device Tree and overlays. * Provides links to datasheets, schematics, and design resources. == Compute Modules Raspberry Pi Compute Modules are *system-on-module (SoM)* variants of the flagship Raspberry Pi single-board computers (SBC). They're designed for industrial and commercial applications, such as digital signage, thin clients, and process automation. Many developers and system designers choose Compute Modules over flagship Raspberry Pi models for their compact design, flexibility, and support for on-board eMMC storage. === Memory, storage, and wireless variants Raspberry Pi Compute Modules are available in several variants, differing in memory, embedded Multi-Media Card (eMMC) flash storage capacity (soldered onto the board), and wireless connectivity (Wi-Fi and Bluetooth). * *Memory.* Compute Modules 1, 3, and 3+ offer a fixed amount of RAM. Compute Modules 4, 4S, and 5 offer different amounts of RAM; for details about the available options, see the dedicated sections for each Compute Module model on this page. * *Storage.* Compute Modules 3, 3+, 4, 4S, and 5 offer different storage options, with later models offering more options and larger sizes than earlier models. Compute Module 1 offers a fixed 4 GB of storage. Storage is provided by eMMC flash memory, which provides persistent storage with low power consumption and built-in features that improve reliability. Variants with no on-board storage are referred to with the suffix *Lite* or *L*, for example, "CM5Lite" or "CM3L". 
* *Wireless.* Compute Modules 4 and 5 offer optional Wi-Fi and Bluetooth. === Models The following table summarises Raspberry Pi Compute Modules in reverse chronological order, listing their SoC, GPU, CPU, and form factor for quick reference. For more information about each of these models, including memory and storage options, see the following dedicated sections on this page. [cols="1,1,1,1,1,1", options="header"] |=== |Model|Based on|SoC|GPU|CPU|Form factor | <<cm5>> (2024) | Raspberry Pi 5 | Broadcom BCM2712 |VideoCore VII | 4 × Cortex-A76 at 2.4 GHz |Dual 100-pin connectors | <<cm4s>> (2022) | Raspberry Pi 4 Model B (in CM3 form factor) | Broadcom BCM2711 |VideoCore VI | 4 × Cortex-A72 at 1.5 GHz |DDR2 SODIMM | <<cm4>> (2020) | Raspberry Pi 4 Model B | Broadcom BCM2711 |VideoCore VI | 4 × Cortex-A72 at 1.5 GHz |Dual 100-pin connectors | <<cm3plus>> (2019) | Raspberry Pi 3 Model B+ | Broadcom BCM2837B0 |VideoCore IV | 4 × Cortex-A53 at 1.2 GHz |DDR2 SODIMM | <<cm3>> (2017; discontinued October 2025) | Raspberry Pi 3 Model B | Broadcom BCM2837 |VideoCore IV | 4 × Cortex-A53 at 1.2 GHz |DDR2 SODIMM | <<cm1>> (2014) | Raspberry Pi Model B | Broadcom BCM2835 |VideoCore IV | 1 × ARM1176JZF-S at 700 MHz |DDR2 SODIMM |=== [[cm5]] === Compute Module 5 .Compute Module 5 image::images/cm5.png[alt="Compute Module 5", width="60%"] Compute Module 5 (*CM5*) combines the core components of Raspberry Pi 5 with optional flash storage. Key features include: * *Processor.* Broadcom BCM2712. * *Memory options.* 2 GB, 4 GB, 8 GB, or 16 GB of RAM. * *Storage options.* 0 GB (*CM5Lite*), 16 GB, 32 GB, or 64 GB of eMMC flash memory. * *Form factor.* Two 100-pin high-density connectors for connecting to the companion carrier board. CM5 uses the same form factor as *CM4* and provides input/output (I/O) interfaces beyond those available on standard Raspberry Pi boards, offering expanded options for more complex systems and designs.
[[cm4s]] === Compute Module 4S .Compute Module 4S image::images/cm4s.jpg[alt="Compute Module 4S", width="60%"] Compute Module 4S (*CM4S*) combines the core components of Raspberry Pi 4 with optional flash storage. Key features include: * *Processor.* Broadcom BCM2711. * *Memory options.* 1 GB, 2 GB, 4 GB, or 8 GB of RAM. * *Storage options.* 0 GB (*CM4SLite*), 8 GB, 16 GB, or 32 GB of eMMC flash memory. * *Form factor.* Standard DDR2 SODIMM module. Unlike *CM4*, CM4S retains the DDR2 SODIMM form factor used in *CM1*, *CM3*, and *CM3+*. [[cm4]] === Compute Module 4 .Compute Module 4 image::images/cm4.jpg[alt="Compute Module 4", width="60%"] Compute Module 4 (*CM4*) combines the core components of Raspberry Pi 4 with optional flash storage. Key features include: * *Processor.* Broadcom BCM2711. * *Memory options.* 1 GB, 2 GB, 4 GB, or 8 GB of RAM. * *Storage options.* 0 GB (*CM4Lite*), 8 GB, 16 GB, or 32 GB of eMMC flash memory. * *Form factor.* Two 100-pin high-density connectors for connecting to the companion carrier board. * *Temperature range options.* Operating temperature of -20°C to +85°C for standard variants or -40°C to +85°C for wider applications. Unlike earlier modules (*CM1*, *CM3*, *CM3+*), CM4 moved away from the DDR2 SODIMM form factor to a dual 100-pin high-density connector layout, which results in a smaller physical footprint. This redesign supports the following additional features: * Dual HDMI connectors * PCIe support * Ethernet connector [[cm3plus]] === Compute Module 3+ .Compute Module 3+ image::images/cm3-plus.jpg[alt="Compute Module 3+", width="60%"] Compute Module 3+ (*CM3+*) combines the core components of Raspberry Pi 3 Model B+ with optional flash storage. Key features include: * *Processor.* Broadcom BCM2837B0. * *Memory*. 1 GB of RAM. * *Storage options.* 0 GB (*CM3+Lite*) or 8 GB, 16 GB, or 32 GB of eMMC flash memory. * *Form factor.* Standard DDR2 SODIMM module. 
[[cm3]] === Compute Module 3 .Compute Module 3 image::images/cm3.jpg[alt="Compute Module 3", width="60%"] IMPORTANT: Raspberry Pi Compute Module 3 (CM3) and Compute Module 3 Lite (CM3Lite) have reached End-of-Life (EoL) due to the discontinuation of the core SoC used in these products. The official EoL date was 16 October 2025. The closest equivalent to CM3 is Raspberry Pi <<cm3plus>>, which offers the same mechanical footprint, improved thermal design, and a BCM2837B0 processor, and so is recommended for existing designs. For new designs requiring the SODIMM form factor, we recommend <<cm4s>>. For all other new designs, we recommend <<cm4>> or <<cm5>>. For more information, see the official https://pip.raspberrypi.com/documents/RP-009286-PC?disposition=inline[Obsolescence Notice]. Compute Module 3 (*CM3*) combines the core components of Raspberry Pi 3 with an optional 4 GB of flash storage. Key features include: * *Processor.* Broadcom BCM2837. * *Memory.* 1 GB of RAM. * *Storage options.* 0 GB (*CM3Lite*) or 4 GB of eMMC flash memory. * *Form factor.* Standard DDR2 SODIMM module. [[cm1]] === Compute Module 1 .Compute Module 1 image::images/cm1.jpg[alt="Compute Module 1", width="60%"] Compute Module 1 (*CM1*) combines the core components of Raspberry Pi Model B with 4 GB of flash storage. Key features include: * *Processor.* Broadcom BCM2835. * *Memory.* 512 MB of RAM. * *Storage.* 4 GB of eMMC flash memory. * *Form factor.* Standard DDR2 SODIMM module. == IO Boards A Raspberry Pi Compute Module IO Board is the companion carrier board that provides the necessary connectors to interface with various input/output (I/O) peripherals on your Compute Module. Raspberry Pi Compute Module IO Boards provide the following functionality: * Supply power to the Compute Module. * Connect general-purpose input/output (GPIO) pins to standard pin headers so that you can attach sensors or electronics. * Make camera and display interfaces available through flat flexible cable (FFC) connectors.
* Make HDMI signals available through HDMI connectors. * Make USB interfaces available through standard USB connectors for peripheral devices. * Provide LEDs that indicate power and activity status. * Enable eMMC programming over USB for flashing the module's on-board storage. * On CM4IO and CM5IO, expose PCIe through connectors so that you can attach storage or peripheral devices like SSDs or network adapters. Raspberry Pi IO Boards are general-purpose boards designed for development, testing, and prototyping Compute Modules. For production use, you might design a smaller, custom carrier board that includes only the connectors you need for your use case. [[io-board-compatibility]] === IO Boards and compatibility Not all IO Boards work with all Compute Module models. The following table summarises Raspberry Pi Compute Module IO Boards in reverse chronological order, listing their compatible Compute Modules (which include Lite versions), power input, and size. For more information about each of these boards, including available interfaces, see the following dedicated sections on this page. [cols="1,1,1,1", options="header"] |=== |IO Board|Compatible CM|Power input|Size | <<cm5io>> (2024) | <<cm5>>; CM4 with reduced functionality | 5 V through USB Type-C |160 mm × 90 mm | <<cm4io>> (2020) | <<cm4>>; CM5 with reduced functionality | 5 V through the GPIO header or 12 V through the DC barrel jack |160 mm × 90 mm | <<cmio,CMIO3>> (2017) | <<cm1>>, <<cm3>>, <<cm3plus>>, and <<cm4s>> | 5 V through GPIO or a micro USB connector | 85 mm × 105 mm | <<cmio,CMIO>> (2014) | <<cm1>> | 5 V through GPIO or a micro USB connector | 85 mm × 105 mm |=== [[cm5io]] === Compute Module 5 IO Board .Compute Module 5 IO Board image::images/cm5io.png[alt="Compute Module 5 IO Board", width="60%"] The Compute Module 5 IO Board (CM5IO) provides the following: * *Power and control connectors.* ** USB-C power using the same standard as Raspberry Pi 5: 5 V at 5 A (25 W) or 5 V at 3 A (15 W) with a 600 mA peripheral limit. ** A power button for CM5.
** Real-time clock (RTC) battery socket. * *Video and display connectors.* ** Two HDMI connectors. ** Two MIPI DSI/CSI-2 combined display/camera FPC connectors (22-pin, 0.5 mm pitch cable). * *Networking and connectivity connectors.* ** Two USB 3.0 (Type-A) connectors for keyboards, storage, or peripherals. ** A USB 2.0 (Type-C) connector for flashing CM5 or additional peripherals. ** A Gigabit Ethernet RJ45 with PoE support. * *Expansion and storage options.* ** An M.2 M-key PCIe socket compatible with the 2230, 2242, 2260, and 2280 form factors. ** A microSD card slot (only for use with *CM5Lite*, which has no eMMC; other variants ignore the slot). ** HAT footprint with 40-pin GPIO connector. ** PoE header. * *Configuration options.* ** Jumpers to disable features such as eMMC boot, EEPROM write, and wireless connectivity. ** Selectable 1.8 V or 3.3 V GPIO voltage. * *Fan connector.* A four-pin JST-SH PWM fan connector. [[cm4io]] === Compute Module 4 IO Board .Compute Module 4 IO Board image::images/cm4io.jpg[alt="Compute Module 4 IO Board", width="60%"] The Compute Module 4 IO Board (CM4IO) provides the following: * *Power and control connectors.* ** 5 V through the GPIO header or 12 V input through barrel jack; supports up to 26 V if PCIe is unused. ** Real-time clock (RTC) battery socket. * *Video and display connectors.* ** Two HDMI connectors. ** Two MIPI DSI display FPC connectors (22-pin, 0.5 mm pitch cable). ** Two MIPI CSI-2 camera FPC connectors (22-pin, 0.5 mm pitch cable). * *Networking and connectivity connectors.* ** Two USB 2.0 connectors. ** A micro USB upstream port. ** A Gigabit Ethernet RJ45 with PoE support. * *Expansion and storage options.* ** PCIe Gen 2 socket. ** A microSD card slot (only for use with *CM4Lite*, which has no eMMC; other variants ignore the slot). ** HAT footprint with 40-pin GPIO connector. ** PoE header. * *Configuration options.* ** Jumpers to disable features such as eMMC boot, EEPROM write, and wireless connectivity.
** Selectable 1.8 V or 3.3 V GPIO voltage. * *Fan connector.* Fan connector supporting standard 12 V fans with PWM drive. [[cmio]] === Compute Module IO Board (versions 1 and 3) .Compute Module IO Board (version 3) image::images/cmio.jpg[alt="Compute Module IO Board (version 3)", width="60%"] There are two variants of the Compute Module IO Board: * Version 1 (CMIO), compatible only with <<cm1>>. * Version 3 (CMIO3), compatible with <<cm1>>, <<cm3>>, <<cm3plus>>, and <<cm4s>>. This version adds a microSD card slot that doesn't exist on CMIO (version 1). The Compute Module IO Board (CMIO and CMIO3) provides the following: * *Power and control connectors.* 5 V input through GPIO or a micro USB connector. * *Video and display connectors.* ** One full-size Type-A HDMI connector. ** Two MIPI DSI display FPC connectors (22-pin, 0.5 mm pitch cable). ** Two MIPI CSI-2 camera FPC connectors (22-pin, 0.5 mm pitch cable). * *Networking and connectivity connectors.* One USB 2.0 Type-A connector. * *Expansion and storage options.* ** 46 GPIO pins. ** (CMIO3 only) A microSD card slot (only for use with *CM3Lite*, *CM3+Lite*, and *CM4SLite*, which have no eMMC). == CM5 and CM5IO accessories Raspberry Pi offers the following accessories for CM5 and CM5IO: * <<case>>, an enclosure for a CM5IO (and attached CM5). The case also optionally fits an antenna and cooler. * <<antenna>>, a 2.4 GHz and 5 GHz antenna for wireless connectivity through CM4 or CM5. * <>, a passive heat sink to dissipate heat from CM5. [[case]] === CM5IO Case .Compute Module 5 IO Board Case (version 2) image::images/cm5io-case-v2.jpg[alt="Compute Module 5 IO Board Case", width="60%"] The Compute Module 5 IO Board (CM5IO) Case is a two-piece metal enclosure that, when assembled, provides physical protection for CM5IO with an attached CM5. The following features apply to the most recent iteration of the CM5IO Case (version 2): * Cut-outs for externally facing connectors and LEDs. * A pre-installed, controllable fan that you can remove.
* An attachment point for a *Raspberry Pi Antenna Kit*. * Space for a *CM5 Cooler* alongside the pre-installed fan. * Space for accessories connected to the IO board, such as an M.2 SSD or PoE+ HAT+. The original version of the case doesn't provide the internal space for all the listed items simultaneously. For more information about the different versions, see <<versions>>. ==== Case specifications .Compute Module 5 IO Board Case ports image::images/cm5io-case-front.png[alt="the port selection on the Compute Module 5 IO Board Case", width="60%"] When assembled, the CM5IO Case measures approximately 170 mm × 94 mm × 28 mm. It's made of sheet metal and weighs approximately 350 g. For thermal management, the case includes a pre-installed fan that directs airflow over your CM5 and CM5IO components. You can remove or replace the fan depending on your cooling requirements. Depending on the case version, you can also optionally add a <> for improved thermal performance; the original case requires removing the fan first, while the updated version provides space for both the fan and cooler together. The following image depicts the physical dimensions of the CM5IO case in millimetres (mm). The size of the case is the same for both versions; the only difference is the placement of the fan. For information about the different versions, see <<versions>>. .CM5IO Case (version 2) physical specifications image::images/case-physical.png[alt="CM5IO Case physical specifications", width="80%"] [[versions]] ==== Case versions There are two iterations of the CM5IO Case, differing in the placement of the pre-installed fan: version 1 and version 2. .Left: version 1 of the CM5IO Case; right: version 2 of the CM5IO Case image::images/case-versions.jpg[alt="Case versions 1 and 2", width="80%"] The first version features a fan that's closer to the long edge of the enclosure.
The internal layout and available clearance in this version don't allow for both the fan and the CM5 Cooler to be installed inside the case at the same time. If you want to install the CM5 Cooler into the case, you must remove the fan. The second version updates the internal layout such that the fan sits closer to the short edge of the enclosure. This revised layout provides sufficient space for both the fan and the CM5 Cooler without modification. For instructions on mounting a CM5 Cooler onto CM5, which you can then attach to an IO board and install into the CM5IO Case, see <>. [[case-assembly]] ==== Case assembly The following steps provide instructions for assembling the most recent version of the CM5IO Case (version 2). Version 1 doesn't allow space for both the fan and a CM5 Cooler at the same time without modification. If you have version 1 of the CM5IO Case, you can either remove the fan (described in step 4) or skip step 6 in the following instructions. For information about the different versions, see <<versions>>. To mount a CM5IO inside your case: . *Attach your CM5 to your CM5IO.* Rotate your CM5 90 degrees to the right to align the dual 100-pin connectors on your CM5 with those on your CM5IO and press gently but firmly to attach them. The mounting holes should also align. . *Open the case.* Unscrew and remove the four screws (two on the left side of the case and two on the right side of the case) using a Phillips screwdriver. Then, separate the top of the case from the base. Keep the screws in a safe place. . *Install your CM5IO assembly into the case.* Place your CM5IO (with CM5 attached) into the base of the case, aligning it with the four mounting holes near the corners of the board. Ensure all externally facing connectors align with the corresponding cut-outs at the front of the case. Then, secure your CM5IO assembly to the base by screwing four M2.5 screws into the four mounting holes. .
*Connect or remove the fan.*
** If using the pre-installed fan, plug the fan connector into the four-pin fan socket labelled *FAN (J14)* on your CM5IO.
** If you want to remove the fan, unscrew the four corner screws of the fan from the underside of the top of the case.
. *Optionally, attach an external antenna.* If you want to install an antenna, follow the instructions in <>.
. *Optionally, attach a cooler.* If you want to install a cooler, follow the instructions in <>. If you're also attaching an antenna, attach the antenna's U.FL connector first for easier access.
. *Optionally, attach a camera or display.* If you're using a camera or a display, pass the flat cable through one of the slots at the back of the case and connect it to one of the *CAM/DISP* ports on your CM5IO.
. *Optionally, install an M.2 SSD.* If you want to install an M.2 SSD, insert it into the M.2 slot in the bottom-right corner of the CM5IO and secure it on the opposite end with a mounting screw.
. *Optionally, install a HAT.* If you want to install a HAT, align it with the 40-pin GPIO header and the mounting posts such that the HAT covers the battery slot, then press it firmly into place and secure it with screws.
. *Close the case.* Fold the top of the case back onto the base of the case, aligning the screw holes on the left and right sides of the case, and the power button on the back of the case. Screw the four screws back into place using a Phillips screwdriver, taking care not to overtighten them.

NOTE: The SD card slot is a push-push slot. To insert an SD card, push it into the SD card slot with the contacts facing downwards. To remove it, push it inwards towards the slot to release it and then pull it out.

[[antenna]]
=== Antenna (CM4 and CM5)

The Raspberry Pi Antenna Kit provides a certified external antenna to boost wireless reception on a CM4 or CM5.
.Antenna attached to a CM4
image::images/cm4-cm5-antenna.jpg[alt="The antenna connected to CM4", width="60%"]

==== Antenna specifications

The antenna supports dual-band Wi-Fi and attaches to the https://en.wikipedia.org/wiki/Hirose_U.FL[U.FL connector] on your CM4 or CM5. The antenna is approximately 108.5 mm at full height and approximately 87.5 mm long when at a 90 degree angle; the SMA to U.FL cable is approximately 205 mm long.

.CM4 and CM5 antenna physical specifications
image::images/cm4-cm5-antenna-physical.png[alt="CM4 and CM5 antenna physical specification", width="80%"]

[[install-antenna]]
==== Connect an antenna through the CM5IO Case

You can use the antenna with the <>. To attach the antenna to your Compute Module through the CM5IO Case, complete the first four steps outlined in <>, and then complete the following steps:

. *Connect the U.FL connector.* Connect the U.FL connector on the antenna cable to the U.FL-compatible connector on your Compute Module, next to the top-left mounting hole of your CM5. Do this before attaching a cooler (if using one) because the cooler can make it harder to attach the U.FL connector.
. *Insert the SMA connector.* Remove the rubber plug from the antenna port on the inside of the CM5IO Case. Then, from the inside of the case, push the SMA connector (with the flat side up) into the antenna port so that it extends through and is accessible from the outside.
. *Fasten the SMA connector into place.* Twist the retaining hexagonal nut and washer onto the SMA connector in a clockwise direction until it sits securely in place. Avoid excessive twisting when tightening to prevent damage.
. *Attach the antenna to the SMA connector.* Insert the SMA connector into the antenna port with the antenna facing outward and twist the antenna clockwise to secure it.
. *Adjust the antenna.* Move the antenna into its final position by turning it up to a 90 degree angle.
You can now complete the remaining steps outlined in <> for mounting a CM5IO inside your case.

.CM4 and CM5 antenna assembly diagram
image::images/cm4-cm5-antenna-assembly.svg[alt="CM4 and CM5 antenna assembly diagram", width="60%"]

To use the antenna with your Compute Module, add a `dtparam` instruction in xref:../computers/config_txt.adoc[`/boot/firmware/config.txt`]. Add the following line to the end of the `config.txt` file:

`dtparam=ant2`

[[cooler]]
=== CM5 Cooler

The CM5 Cooler is a passive heat sink that helps dissipate heat from your CM5, improving CPU performance and longevity.

.CM5 cooler
image::images/cm5-cooler.jpg[alt="CM5 Cooler", width="60%"]

==== Cooler specifications

The CM5 Cooler dimensions are approximately 41 mm × 56 mm × 12.7 mm. The cooler is an aluminium heat sink with a conductive silicone pad on the bottom.

Newer versions of the <> allow both the cooler and pre-installed fan to be used inside the case at the same time. If you have an older version of the CM5IO Case, you must remove the fan from the case to allow space for the cooler.

.CM5 cooler physical specifications
image::images/cm5-cooler-physical.png[alt="CM5 Cooler physical specification", width="80%"]

[[mounting]]
==== Mount a CM5 Cooler

To mount the cooler to your CM5:

. Remove the protective paper from the silicone pad on the bottom of the cooler.
. Attach the silicone pad at the bottom of the cooler to the top of your CM5. Place the cooler on your CM5 such that the cutout in the cooler is above the on-board antenna (the trapezoid-shaped area on the left of a CM5) and the https://en.wikipedia.org/wiki/Hirose_U.FL[U.FL connector] next to it (if it has one).
. Optionally, fasten screws in the mounting points found in each corner to secure the cooler. If you omit the screws, the bond between your cooler and your CM5 improves through time and use.
---

# Source: compute-module.adoc

include::compute-module/introduction.adoc[]
include::compute-module/cm-emmc-flashing.adoc[]
include::compute-module/cm-bootloader.adoc[]
include::compute-module/cm-peri-sw-guide.adoc[]
include::compute-module/cmio-camera.adoc[]
include::compute-module/cmio-display.adoc[]
include::compute-module/datasheet.adoc[]

---

# Source: audio.adoc

== Onboard analogue audio (3.5 mm jack)

The onboard audio output uses config options to change the way the analogue audio is driven, and whether some firmware features are enabled or not.

=== `audio_pwm_mode`

`audio_pwm_mode=1` selects legacy low-quality analogue audio from the 3.5 mm AV jack. `audio_pwm_mode=2` (the default) selects high-quality analogue audio using an advanced modulation scheme.

NOTE: This option uses more GPU compute resources and can interfere with some use cases on some models.

=== `disable_audio_dither`

By default, a 1.0LSB dither is applied to the audio stream if it is routed to the analogue audio output. This can create audible background hiss in some situations, for example when the ALSA volume is set to a low level. Set `disable_audio_dither` to `1` to disable dither application.

=== `enable_audio_dither`

Audio dither (see `disable_audio_dither` above) is normally disabled when the audio samples are larger than 16 bits. Set this option to `1` to force the use of dithering for all bit depths.

=== `pwm_sample_bits`

The `pwm_sample_bits` command adjusts the bit depth of the analogue audio output. The default bit depth is `11`. Selecting bit depths below `8` will result in nonfunctional audio, as settings below `8` produce a PLL frequency too low for the hardware to support. This is generally only useful as a demonstration of how bit depth affects quantisation noise.
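The analogue audio options above are plain key=value lines in `config.txt`. As a hedged illustration (the values shown are examples for experimentation, not recommendations), a fragment exercising them might look like this:

[source,ini]
----
# Default high-quality analogue output from the 3.5 mm jack
audio_pwm_mode=2
# Silence the background hiss that dither can cause at low ALSA volumes
disable_audio_dither=1
# Demonstration only: lower bit depths make quantisation noise audible
pwm_sample_bits=9
----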
== HDMI audio

By default, HDMI audio output is enabled on all Raspberry Pi models with HDMI output. To disable HDMI audio output, append `,noaudio` to the end of the `dtoverlay=vc4-kms-v3d` line in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]:

[source,ini]
----
dtoverlay=vc4-kms-v3d,noaudio
----

---

# Source: autoboot.adoc

== `autoboot.txt`

`autoboot.txt` is an optional configuration file that can be used to specify the `boot_partition` number. It can also be used in conjunction with the `tryboot` feature to implement A/B booting for OS upgrades. `autoboot.txt` is limited to 512 bytes and supports the `[all]`, `[none]` and `[tryboot]` xref:config_txt.adoc#conditional-filters[conditional] filters. See also the xref:raspberry-pi.adoc#fail-safe-os-updates-tryboot[TRYBOOT] boot flow.

=== `boot_partition`

Specifies the partition number for booting unless the partition number was already specified as a parameter to the `reboot` command (e.g. `sudo reboot 2`). Partition numbers start at `1` and the MBR partitions are `1` to `4`. Specifying partition `0` means boot from the `default` partition, which is the first bootable FAT partition. Bootable partitions must be formatted as FAT12, FAT16 or FAT32 and contain a `start.elf` file (or `config.txt` file on Raspberry Pi 5) in order to be classed as bootable by the bootloader.

=== The `[tryboot]` filter

This filter passes if the system was booted with the `tryboot` flag set.

[source,console]
----
$ sudo reboot "0 tryboot"
----

=== `tryboot_a_b`

Set this property to `1` to load the normal `config.txt` and `boot.img` files instead of `tryboot.txt` and `tryboot.img` when the `tryboot` flag is set. This enables the `tryboot` switch to be made at the partition level rather than the file level, without having to modify configuration files in the A/B partitions.
=== Example update flow for A/B booting

The following pseudo-code shows how a hypothetical OS `Update service` could use `tryboot` in `autoboot.txt` to perform a fail-safe OS upgrade.

Initial `autoboot.txt`:

[source,ini]
----
[all]
tryboot_a_b=1
boot_partition=2
[tryboot]
boot_partition=3
----

**Installing the update**

* System is powered on and boots to partition 2 by default
* An `Update service` downloads the next version of the OS to partition 3
* The update is tested by rebooting to `tryboot` mode (`reboot "0 tryboot"`, where `0` means the default partition)

**Committing or cancelling the update**

* System boots from partition 3 because the `[tryboot]` filter evaluates to true in `tryboot` mode
* If tryboot is active (`/proc/device-tree/chosen/bootloader/tryboot == 1`)
** If the current boot partition (`/proc/device-tree/chosen/bootloader/partition`) matches the `boot_partition` in the `[tryboot]` section of `autoboot.txt`
*** The `Update Service` validates the system to verify that the update was successful
*** If the update was successful
**** Replace `autoboot.txt`, swapping the `boot_partition` configuration
**** Normal reboot - partition 3 is now the default boot partition
*** Else
**** `Update Service` marks the update as failed, e.g. it removes the update files
**** Normal reboot - partition 2 is still the default boot partition because the `tryboot` flag is automatically cleared
*** End if
** End if
* End if

Updated `autoboot.txt`:

[source,ini]
----
[all]
tryboot_a_b=1
boot_partition=3
[tryboot]
boot_partition=2
----

[NOTE]
======
It's not mandatory to reboot after updating `autoboot.txt`. However, the `Update Service` must be careful to avoid overwriting the current partition since `autoboot.txt` has already been modified to commit the last update. For more information, see xref:configuration.adoc#device-trees-overlays-and-parameters[Device Tree parameters].
======

---

# Source: boot.adoc

== Boot Options

=== `start_file`, `fixup_file`

These options specify the firmware files transferred to the VideoCore GPU prior to booting. `start_file` specifies the VideoCore firmware file to use. `fixup_file` specifies the file used to fix up memory locations used in the `start_file` to match the GPU memory split. The `start_file` and the `fixup_file` are a matched pair - using unmatched files will stop the board from booting. This is an advanced option, so we advise that you use `start_x` and `start_debug` rather than this option.

NOTE: Cut-down firmware (`start*cd.elf` and `fixup*cd.dat`) cannot be selected this way - the system will fail to boot. The only way to enable the cut-down firmware is to specify `gpu_mem=16`. The cut-down firmware removes support for codecs, 3D and debug logging as well as limiting the initial early-boot framebuffer to 1080p @16bpp - although KMS can replace this with up to 32bpp 4K framebuffer(s) at a later stage as with any firmware.

NOTE: The Raspberry Pi 5, 500, 500+, and Compute Module 5 firmware is self-contained in the bootloader EEPROM.

=== `cmdline`

`cmdline` is the alternative filename on the boot partition from which to read the kernel command line string; the default value is `cmdline.txt`.

=== `kernel`

`kernel` is the alternative filename on the boot partition for loading the kernel. The default value on the Raspberry Pi 1, Zero and Zero W, and Raspberry Pi Compute Module 1 is `kernel.img`. The default value on the Raspberry Pi 2, 3, 3+ and Zero 2 W, and Raspberry Pi Compute Modules 3 and 3+ is `kernel7.img`. The default value on the Raspberry Pi 4 and 400, and Raspberry Pi Compute Module 4 is `kernel8.img`, or `kernel7l.img` if `arm_64bit` is set to 0. The Raspberry Pi 5, 500, 500+, and Compute Module 5 firmware defaults to loading `kernel_2712.img` because this image contains optimisations specific to those models (e.g.
16K page-size). If this file is not present, then the common 64-bit kernel (`kernel8.img`) will be loaded instead.

=== `arm_64bit`

If set to 1, the kernel will be started in 64-bit mode. Setting to 0 selects 32-bit mode. In 64-bit mode, the firmware will choose an appropriate kernel (e.g. `kernel8.img`), unless there is an explicit `kernel` option defined, in which case that is used instead. Defaults to 1 on Raspberry Pi 4, 400 and Compute Module 4, 4S platforms. Defaults to 0 on all other platforms. However, if the name given in an explicit `kernel` option matches one of the known kernels, then `arm_64bit` will be set accordingly.

64-bit kernels come in the following forms:

* uncompressed image files
* gzip archives of an image

Both forms may use the `img` file extension; the bootloader recognises archives using signature bytes at the start of the file.

The following Raspberry Pi models support this flag:

* 2B rev 1.2
* 3B
* 3A+
* 3B+
* 4B
* 400
* Zero 2 W
* Compute Module 3
* Compute Module 3+
* Compute Module 4
* Compute Module 4S

Flagship models since Raspberry Pi 5, Compute Modules since CM5, and Keyboard models since Pi 500 _only_ support the 64-bit kernel. Models that only support a 64-bit kernel ignore this flag.

=== `armstub`

`armstub` is the filename on the boot partition from which to load the Arm stub. The default Arm stub is stored in firmware and is selected automatically based on the Raspberry Pi model and various settings. The stub is a small piece of Arm code that is run before the kernel. Its job is to set up low-level hardware like the interrupt controller before passing control to the kernel.

=== `ramfsfile`

`ramfsfile` is the optional filename on the boot partition of a `ramfs` to load.

NOTE: Newer firmware supports the loading of multiple `ramfs` files. You should separate the multiple file names with commas, taking care not to exceed the 80-character line length limit.
All the loaded files are concatenated in memory and treated as a single `ramfs` blob. More information is available https://forums.raspberrypi.com/viewtopic.php?f=63&t=10532[on the forums].

=== `ramfsaddr`

`ramfsaddr` is the memory address to which the `ramfsfile` should be loaded.

[[initramfs]]
=== `initramfs`

The `initramfs` command specifies both the ramfs filename *and* the memory address to which to load it. It performs the actions of both `ramfsfile` and `ramfsaddr` in one parameter. The address can also be `followkernel` (or `0`) to place it in memory after the kernel image. Example values are: `initramfs initramf.gz 0x00800000` or `initramfs init.gz followkernel`. As with `ramfsfile`, newer firmware allows the loading of multiple files by comma-separating their names.

NOTE: This option uses different syntax from all the other options, and you should not use the `=` character here.

[[auto_initramfs]]
=== `auto_initramfs`

If `auto_initramfs` is set to `1`, the firmware looks for an `initramfs` file to match the kernel. The file must be in the same location as the kernel image, and the name is derived from the name of the kernel by replacing the `kernel` prefix with `initramfs` and removing any extension such as `.img`; e.g. `kernel8.img` requires `initramfs8`. You can make use of `auto_initramfs` with custom kernel names, provided the names begin with `kernel` and `initramfs` respectively and everything else matches (except for the absence of the file extension on the initramfs). Otherwise, an explicit xref:config_txt.adoc#initramfs[`initramfs`] statement is required.

[[disable_poe_fan]]
=== `disable_poe_fan`

By default, a probe on the I2C bus will happen at startup, even when a PoE HAT is not attached. Setting this option to 1 disables control of a PoE HAT fan through I2C (on pins ID_SD & ID_SC). If you are not intending to use a PoE HAT, this is a helpful way to minimise boot time.
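Several of the options above combine naturally in a single `config.txt`. The following fragment is a sketch only, using the `kernel8.img`/`initramfs8` naming convention described above:

[source,ini]
----
# Look for an initramfs named to match the kernel (kernel8.img -> initramfs8)
auto_initramfs=1
# Alternatively, name it explicitly; note that initramfs takes no '=' sign:
# initramfs initramfs8 followkernel
# Skip the PoE HAT fan probe at startup to minimise boot time
disable_poe_fan=1
----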
=== `disable_splash`

If `disable_splash` is set to `1`, the rainbow splash screen will not be shown on boot. The default value is `0`.

=== `enable_uart`

`enable_uart=1` (in conjunction with `console=serial0,115200` in `cmdline.txt`) requests that the kernel creates a serial console, accessible using GPIOs 14 and 15 (pins 8 and 10 on the 40-pin header). Editing `cmdline.txt` to remove the line `quiet` enables boot messages from the kernel to also appear there. See also `uart_2ndstage`.

=== `force_eeprom_read`

Set this option to `0` to prevent the firmware from trying to read an I2C HAT EEPROM (connected to pins ID_SD & ID_SC) at power up. See also xref:config_txt.adoc#disable_poe_fan[`disable_poe_fan`].

[[os_prefix]]
=== `os_prefix`

`os_prefix` is an optional setting that allows you to choose between multiple versions of the kernel and Device Tree files installed on the same card. Any value in `os_prefix` is prepended to the name of any operating system files loaded by the firmware, where "operating system files" is defined to mean kernels, `initramfs`, `cmdline.txt`, `.dtbs` and overlays. The prefix would commonly be a directory name, but it could also be part of the filename, such as "test-". For this reason, directory prefixes must include the trailing `/` character.

In an attempt to reduce the chance of a non-bootable system, the firmware first tests the supplied prefix value for viability - unless the expected kernel and .dtb can be found at the new location/name, the prefix is ignored (set to ""). A special case of this viability test is applied to overlays, which will only be loaded from `+${os_prefix}${overlay_prefix}+` (where the default value of <> is "overlays/") if `+${os_prefix}${overlay_prefix}README+` exists; otherwise it ignores `os_prefix` and treats overlays as shared.
(The reason the firmware checks for the existence of key files rather than directories when checking prefixes is twofold: the prefix may not be a directory, and not all boot methods support testing for the existence of a directory.)

NOTE: Any user-specified OS file can bypass all prefixes by using an absolute path (with respect to the boot partition) - just start the file path with a `/`, e.g. `kernel=/my_common_kernel.img`.

See also <> and xref:legacy_config_txt.adoc#upstream_kernel[`upstream_kernel`].

=== `otg_mode` (Raspberry Pi 4 only)

USB On-The-Go (often abbreviated to OTG) is a feature that allows supporting USB devices with an appropriate OTG cable to configure themselves as USB hosts. On older Raspberry Pis, a single USB 2 controller was used in both USB host and device mode.

Flagship models since Raspberry Pi 4B and Keyboard models since Pi 400 add a high-performance USB 3 controller, attached via PCIe, to drive the main USB ports. The legacy USB 2 controller is still available on the USB-C power connector for use as a device (`otg_mode=0`, the default). Compute Modules before CM5 do not include this high-performance USB 3 controller.

`otg_mode=1` requests that a more capable XHCI USB 2 controller is used as an alternative host controller on that USB-C connector.

NOTE: By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting on Compute Module 4.

[[overlay_prefix]]
=== `overlay_prefix`

Specifies a subdirectory/prefix from which to load overlays, and defaults to `overlays/` (note the trailing `/`). If used in conjunction with <>, the `os_prefix` comes before the `overlay_prefix`, i.e. `dtoverlay=disable-bt` will attempt to load `+${os_prefix}${overlay_prefix}disable-bt.dtbo+`.

NOTE: Unless `+${os_prefix}${overlay_prefix}README+` exists, overlays are shared with the main OS (i.e. `os_prefix` is ignored).
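To make the prefix mechanics concrete, here is a hypothetical layout sketch (the `test/` directory name is invented for illustration):

[source,ini]
----
# Boot files live under test/ on the boot partition:
#   test/kernel8.img, test/cmdline.txt, test/overlays/...
os_prefix=test/
overlay_prefix=overlays/
# This now attempts to load test/overlays/disable-bt.dtbo, provided
# test/overlays/README exists; otherwise overlays are treated as shared.
dtoverlay=disable-bt
----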
=== Configuration Properties

Raspberry Pi 5 requires a `config.txt` file to be present to indicate that the partition is bootable.

[[boot_ramdisk]]
==== `boot_ramdisk`

If this property is set to `1`, then the bootloader will attempt to load a ramdisk file called `boot.img` containing the xref:configuration.adoc#boot-folder-contents[boot filesystem]. Subsequent files (e.g. `start4.elf`) are read from the ramdisk instead of the original boot file system. The primary purpose of `boot_ramdisk` is to support `secure-boot`; however, unsigned `boot.img` files can also be useful to network boot or `RPIBOOT` configurations.

* The maximum size for a ramdisk file is 96 MB.
* `boot.img` files are raw disk `.img` files. The recommended format is a plain FAT32 partition with no MBR.
* The memory for the ramdisk filesystem is released before the operating system is started.
* If xref:raspberry-pi.adoc#fail-safe-os-updates-tryboot[TRYBOOT] is selected, then the bootloader will search for `tryboot.img` instead of `boot.img`.
* See also xref:config_txt.adoc#autoboot-txt[autoboot.txt].

For more information about `secure-boot` and creating `boot.img` files, please see https://github.com/raspberrypi/usbboot/blob/master/Readme.md[USBBOOT].

Default: `0`

[[boot_load_flags]]
==== `boot_load_flags`

Experimental property for custom firmware (bare metal). Bit 0 (0x1) indicates that the .elf file is custom firmware. This disables any compatibility checks (e.g. is USB MSD boot supported) and resets PCIe before starting the executable. Not relevant on Raspberry Pi 5 because there is no `start.elf` file.

Default: `0x0`

[[enable_rp1_uart]]
==== `enable_rp1_uart`

Raspberry Pi 5 only. When set to `1`, firmware initialises RP1 UART0 to 115200bps and doesn't reset RP1 before starting the OS (separately configurable using `pciex4_reset=1`). This makes it easier to get UART output on the 40-pin header in early boot-code, for instance during bare-metal debug.
Default: `0x0`

[[pciex4_reset]]
==== `pciex4_reset`

Raspberry Pi 5 only. By default, the PCIe x4 controller used by `RP1` is reset before starting the operating system. If this parameter is set to `0`, then the reset is disabled, allowing operating system or bare metal code to inherit the PCIe configuration setup from the bootloader.

Default: `1`

[[sha256]]
==== `sha256`

If set to non-zero, enables the logging of SHA256 hashes for loaded files (the kernel, initramfs, Device Tree .dtb file, and overlays), as generated by the `sha256sum` utility. The logging output goes to the UART if enabled, and is also accessible via `sudo vclog --msg`. This option may be useful when debugging boot problems, but at the cost of potentially adding _many_ seconds to the boot time.

Defaults to 0 on all platforms.

[[uart_2ndstage]]
==== `uart_2ndstage`

If `uart_2ndstage` is `1`, debug logging to the UART is enabled. This option also automatically enables UART logging in `start.elf`. This is also described on the xref:config_txt.adoc#boot-options[Boot options] page. The `BOOT_UART` property also enables bootloader UART logging, but does not enable UART logging in `start.elf` unless `uart_2ndstage=1` is also set.

Default: `0`

[[erase_eeprom]]
==== `erase_eeprom`

If `erase_eeprom` is set to `1`, then `recovery.bin` will erase the entire SPI EEPROM instead of flashing the bootloader image. This property has no effect during a normal boot.

Default: `0`

[[set_reboot_arg1]]
==== `set_reboot_arg1`

Raspberry Pi 5 only. Sets the value of `boot_arg1` to be passed via a reset-safe register to the bootloader after a reboot. See xref:config_txt.adoc#boot_arg1[`boot_arg1`] for more details.

Default: ``

[[set_reboot_order]]
==== `set_reboot_order`

Raspberry Pi 5 only. Sets the value of xref:raspberry-pi.adoc#BOOT_ORDER[BOOT_ORDER] to be passed via a reset-safe register to the bootloader after a reboot. As with `tryboot`, this is a one-time setting and is automatically cleared after use.
This property could be used to debug different xref:raspberry-pi.adoc#BOOT_ORDER[BOOT_ORDER] settings. Alternatively, it could be used in a provisioning system which has control over power and the `nRPIBOOT` GPIO to override the boot mode without specifying xref:config_txt.adoc#conditional-filters[conditional filter] statements in the EEPROM config.

Default: ``

[[kernel_watchdog_timeout]]
==== `kernel_watchdog_timeout`

If set to a non-zero value (in seconds), this property enables a hardware watchdog timer that is handed over to the operating system (OS) at boot. If the OS does not regularly "kick" or reset the watchdog, the system will be reset after the specified timeout.

This property sets the `systemd` `watchdog.open_timeout` parameter, which controls how long the OS has to initialise and start servicing the watchdog. The value is passed to the OS via the kernel command line. For ongoing operation, the OS must also regularly reset the watchdog, typically controlled by the `RuntimeWatchdogSec` parameter in `systemd`. For more information, see https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html#RuntimeWatchdogSec=[systemd watchdog documentation].

[NOTE]
====
On Raspberry Pi OS _Bookworm_ and earlier, the `RuntimeWatchdogSec` parameter is **not enabled by default**, and this setting must be configured first in `/etc/systemd/system.conf` before the firmware kernel watchdog can be used.

If both `BOOT_WATCHDOG_TIMEOUT` (EEPROM/bootloader setting, only supported on Raspberry Pi 4 and 5) and `kernel_watchdog_timeout` are set, the bootloader will seamlessly hand over from the bootloader watchdog to the kernel watchdog at the point the OS is started. This provides continuous watchdog coverage from power-on through to OS runtime.

It is preferred to use `kernel_watchdog_timeout` rather than `dtparam=watchdog` because `kernel_watchdog_timeout` explicitly sets the `open_timeout` parameter, ensuring the watchdog is active until systemd takes over.
====

This is useful for ensuring that the system can recover from OS hangs or crashes after the boot process has completed.

Default: `0` (disabled)

[[kernel_watchdog_partition]]
==== `kernel_watchdog_partition`

If the kernel watchdog triggers (i.e. the OS fails to reset the watchdog within the timeout), this property specifies the partition number to boot from after the reset. This allows for automatic failover to a recovery or alternate partition. You can use this in conjunction with the xref:config_txt.adoc#the-expression-filter[expression filter] to apply different settings or select a different boot flow when the watchdog triggers a reboot to a specific partition. See also the xref:raspberry-pi.adoc#PARTITION[PARTITION] property for more information about how to use high partition numbers to detect a watchdog trigger.

Default: `0` (default partition)

[[eeprom_write_protect]]
==== `eeprom_write_protect`

Configures the EEPROM `write status register`. This can be set either to mark the entire EEPROM as write-protected, or to clear write-protection. This option must be used in conjunction with the EEPROM `/WP` pin, which controls updates to the EEPROM `Write Status Register`. Pulling `/WP` low (CM4 `EEPROM_nWP` or on a Raspberry Pi 4 `TP5`) does NOT write-protect the EEPROM unless the `Write Status Register` has also been configured. See the https://www.winbond.com/resource-files/w25x40cl_f%2020140325.pdf[Winbond W25x40cl] or https://www.winbond.com/hq/product/code-storage-flash-memory/serial-nor-flash/?__locale=en&partNo=W25Q16JV[Winbond W25Q16JV] datasheets for further details.

The following table describes the `eeprom_write_protect` settings in `config.txt` for `recovery.bin`:

|===
| Value | Description

| 1
| Configures the write protect regions to cover the entire EEPROM.

| 0
| Clears the write protect regions.

| -1
| Do nothing.
|===

NOTE: `flashrom` does not support clearing of the write-protect regions and will fail to update the EEPROM if write-protect regions are defined.
On Raspberry Pi 5, `/WP` is pulled low by default, and consequently write-protect is enabled as soon as the `Write Status Register` is configured. To clear write-protect, pull `/WP` high by connecting `TP14` and `TP1`.

Default: `-1`

[[os_check]]
==== `os_check`

On Raspberry Pi 5, the firmware automatically checks for a compatible Device Tree file before attempting to boot from the current partition. Otherwise, older non-compatible kernels would be loaded and then hang. To disable this check (e.g. for bare-metal development), set `os_check=0` in `config.txt`.

Default: `1`

[[bootloader_update]]
==== `bootloader_update`

This option may be set to 0 to block self-update without requiring the EEPROM configuration to be updated. This is sometimes useful when updating multiple Raspberry Pis via network boot, because this option can be controlled per Raspberry Pi (e.g. via a serial number filter in `config.txt`).

Default: `1`

=== Secure Boot configuration properties

[.whitepaper, title="How to use Raspberry Pi Secure Boot", subtitle="", link=https://pip.raspberrypi.com/documents/RP-003466-WP-Boot-Security-Howto.pdf]
****
This whitepaper describes how to implement secure boot on devices based on Raspberry Pi 4. For an overview of our approach to implementing secure boot, please see the https://pip.raspberrypi.com/documents/RP-004651-WP-Raspberry-Pi-4-Boot-Security.pdf[Raspberry Pi 4 Boot Security] whitepaper. The secure boot system is intended for use with `buildroot`-based OS images; using it with Raspberry Pi OS is not recommended or supported.
****

The following `config.txt` properties are used to program the `secure-boot` OTP settings. These changes are irreversible and can only be programmed via `RPIBOOT` when flashing the bootloader EEPROM image. This ensures that `secure-boot` cannot be set remotely or by accidentally inserting a stale SD card image.
For more information about enabling `secure-boot`, please see the https://github.com/raspberrypi/usbboot/blob/master/Readme.md#secure-boot[Secure Boot readme] and the https://github.com/raspberrypi/usbboot/blob/master/secure-boot-example/README.md[Secure Boot tutorial] in the https://github.com/raspberrypi/usbboot[USBBOOT] repo.

[[program_pubkey]]
==== `program_pubkey`

If this property is set to `1`, then `recovery.bin` will write the hash of the public key in the EEPROM image to OTP. Once set, the bootloader will reject EEPROM images signed with different RSA keys or unsigned images.

Default: `0`

[[revoke_devkey]]
==== `revoke_devkey`

Raspberry Pi 4 only. If this property is set to `1`, then `recovery.bin` will write a value to OTP that prevents the ROM from loading old versions of the second stage bootloader which do not support `secure-boot`. This prevents `secure-boot` from being turned off by reverting to an older release of the bootloader. Therefore, this property must be set if `secure-boot` is enabled on production devices. This property is automatically set by `recovery.bin` `2025/05/16` and newer if `program_pubkey=1`.

Default: `0`

[[program_rpiboot_gpio]]
==== `program_rpiboot_gpio`

Raspberry Pi 4B and Raspberry Pi 400 only. Compute Module 4 and 4S have a dedicated `nRPIBOOT` jumper to select `RPIBOOT` mode. Raspberry Pi 4B and Raspberry Pi 400 lack a dedicated `nRPIBOOT` jumper, so one of the following GPIOs must be selected for use as `nRPIBOOT`:

* `2`
* `4`
* `5`
* `6`
* `7`
* `8`

The GPIO may be used as a general-purpose I/O pin after the OS has started. However, you should verify that this GPIO configuration does not conflict with any HATs which might pull the GPIO low during boot. Although `secure-boot` requires this property to be set on Raspberry Pi 4B and Raspberry Pi 400, it does not depend on `secure-boot`. For example, `RPIBOOT` can be useful for automated testing. For safety, this OTP value can _only_ be programmed via `RPIBOOT`.
As a result, you must first clear the bootloader EEPROM using `erase_eeprom`. The blank EEPROM causes the ROM to failover to `RPIBOOT` mode, which then allows this option to be set. Default: `{nbsp}` [[program_jtag_lock]] ==== `program_jtag_lock` If this property is set to `1` then `recovery.bin` will program an OTP value that prevents VideoCore JTAG from being used. This option requires that `program_pubkey` and `revoke_devkey` are also set. This option can prevent failure analysis, and should only be set after the device has been fully tested. Default: `0` --- # Source: camera.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Camera settings === `disable_camera_led` Setting `disable_camera_led` to `1` prevents the red camera LED from turning on when recording video or taking a still picture. This is useful for preventing reflections, for example when the camera is facing a window. === `awb_auto_is_greyworld` Setting `awb_auto_is_greyworld` to `1` allows libraries or applications that do not support the greyworld option internally to capture valid images and videos with NoIR cameras. It switches auto awb mode to use the greyworld algorithm. This should only be needed for NoIR cameras, or when the High Quality camera has had its xref:../accessories/camera.adoc#filter-removal[IR filter removed]. --- # Source: codeclicence.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Licence key and codec options Hardware decoding of additional codecs on the Raspberry Pi 3 and earlier models can be enabled by https://codecs.raspberrypi.com/license-keys/[purchasing a licence] that is locked to the CPU serial number of your Raspberry Pi. The Raspberry Pi 4 has permanently disabled hardware decoders for MPEG2 and VC1. These codecs cannot be enabled, so a hardware codec licence key is not needed. Software decoding of MPEG2 and VC1 files performs well enough for typical use cases. 
The Raspberry Pi 5 has H.265 (HEVC) hardware decoding. This decoding is enabled by default, so a hardware codec licence key is not needed. === `decode_MPG2` `decode_MPG2` is a licence key to allow hardware MPEG-2 decoding, e.g. `decode_MPG2=0x12345678`. === `decode_WVC1` `decode_WVC1` is a licence key to allow hardware VC-1 decoding, e.g. `decode_WVC1=0x12345678`. If you have multiple Raspberry Pis and you've bought a codec licence for each of them, you can list up to eight licence keys in a single `config.txt`, for example `decode_MPG2=0x12345678,0xabcdabcd,0x87654321`. This enables you to swap the same SD card between the different Raspberry Pis without having to edit `config.txt` each time. --- # Source: common.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Common options === Common display options ==== `hdmi_enable_4kp60` NOTE: This option applies only to Raspberry Pi 4, Compute Module 4, Compute Module 4S, and Pi 400. By default, when connected to a 4K monitor, certain models select a 30Hz refresh rate. Use this option to allow selection of 60Hz refresh rates. Models impacted by this setting do _not_ support 4Kp60 output on both micro HDMI ports simultaneously. Enabling this setting increases power consumption and temperature. === Common hardware configuration options ==== `camera_auto_detect` By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting. When enabled, the firmware will automatically load overlays for recognised CSI cameras. To disable, set `camera_auto_detect=0` (or remove `camera_auto_detect=1`). ==== `display_auto_detect` By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting. When enabled, the firmware will automatically load overlays for recognised DSI displays. To disable, set `display_auto_detect=0` (or remove `display_auto_detect=1`). 
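With both auto-detect options disabled, any overlays that are needed must be loaded explicitly with `dtoverlay`. A minimal sketch; the `imx219` camera overlay named here is just an illustrative choice of module:

[source,ini]
----
# Disable automatic overlay loading for cameras and displays
camera_auto_detect=0
display_auto_detect=0

# Load a camera overlay manually instead (illustrative example)
dtoverlay=imx219
----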
==== `dtoverlay`

The `dtoverlay` option requests the firmware to load a named Device Tree overlay - a configuration file that can enable kernel support for built-in and external hardware. For example, `dtoverlay=vc4-kms-v3d` loads an overlay that enables the kernel graphics driver.

As a special case, if called with no value - `dtoverlay=` - the option marks the end of a list of overlay parameters. If used before any other `dtoverlay` or `dtparam` setting, it prevents the loading of any HAT overlay.

For more details, see xref:configuration.adoc#part3.1[DTBs, overlays and config.txt].

==== `dtparam`

Device Tree configuration files for Raspberry Pi devices support various parameters for such things as enabling I2C and SPI interfaces. Many DT overlays are also configurable via the use of parameters. Both types of parameters can be supplied using the `dtparam` setting. In addition, overlay parameters can be appended to the `dtoverlay` option, separated by commas, but keep in mind the line length limit of 98 characters.

For more details, see xref:configuration.adoc#part3.1[DTBs, overlays and config.txt].

==== `arm_boost`

NOTE: This option applies only to later Raspberry Pi 4B revisions which include two-phase power delivery, and all revisions of Pi 400.

By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting on supported devices. Some Raspberry Pi devices have a second switch-mode power supply for the SoC voltage rail. When enabled, this setting increases the default turbo-mode clock from 1.5 GHz to 1.8 GHz. To disable, set `arm_boost=0`.

==== `power_force_3v3_pwm`

NOTE: This option applies only to Raspberry Pi 5, 500, 500+, and Compute Module 5.

Forces PWM on the 3.3V output from the GPIO header or CSI connector. To disable, set `power_force_3v3_pwm=0`.
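The options in this section are all plain `config.txt` lines. A combined sketch; the `noaudio` overlay parameter is assumed here to be a documented option of the `vc4-kms-v3d` overlay, and is shown only to illustrate the comma-separated parameter syntax:

[source,ini]
----
# Base Device Tree parameters: enable the I2C and SPI interfaces
dtparam=i2c_arm=on
dtparam=spi=on

# Load the KMS graphics overlay, appending an overlay parameter after a comma
dtoverlay=vc4-kms-v3d,noaudio

# Opt out of the 1.8 GHz turbo clock on supported boards
arm_boost=0
----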
--- # Source: conditional.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[conditional-filters]] == Conditional filters When a single SD card (or card image) is being used with one Raspberry Pi and one monitor, it is easy to set `config.txt` as required for that specific combination and keep it that way, amending it only when something changes. However, if one Raspberry Pi is swapped between different monitors, or if the SD card (or card image) is being swapped between multiple boards, a single set of settings may no longer be sufficient. Conditional filters allow you to define certain sections of the config file to be used only in specific cases, allowing a single `config.txt` to create different configurations when read by different hardware. === The `[all]` filter The `[all]` filter is the most basic filter. It resets all previously set filters and allows any settings listed below it to be applied to all hardware. It is usually a good idea to add an `[all]` filter at the end of groups of filtered settings to avoid unintentionally combining filters (see below). === Model filters The conditional model filters apply according to the following table. 
|===
| Filter | Applicable models

| `[pi1]` | 1A, 1B, 1A+, 1B+, Compute Module 1
| `[pi2]` | 2B (BCM2836- or BCM2837-based)
| `[pi3]` | 3B, 3B+, 3A+, Compute Module 3, Compute Module 3+
| `[pi3+]` | 3A+, 3B+ (also sees `[pi3]` contents)
| `[pi4]` | 4B, 400, Compute Module 4, Compute Module 4S
| `[pi5]` | 5, 500, 500+, Compute Module 5
| `[pi400]` | 400 (also sees `[pi4]` contents)
| `[pi500]` | 500/500+ (also sees `[pi5]` contents)
| `[cm1]` | Compute Module 1 (also sees `[pi1]` contents)
| `[cm3]` | Compute Module 3 (also sees `[pi3]` contents)
| `[cm3+]` | Compute Module 3+ (also sees `[pi3+]` contents)
| `[cm4]` | Compute Module 4 (also sees `[pi4]` contents)
| `[cm4s]` | Compute Module 4S (also sees `[pi4]` contents)
| `[cm5]` | Compute Module 5 (also sees `[pi5]` contents)
| `[pi0]` | Zero, Zero W, Zero 2 W
| `[pi0w]` | Zero W (also sees `[pi0]` contents)
| `[pi02]` | Zero 2 W (also sees `[pi0w]` and `[pi0]` contents)
| `[board-type=Type]` | Filter by `Type` number - see xref:raspberry-pi.adoc#raspberry-pi-revision-codes[Raspberry Pi Revision Codes], e.g. `[board-type=0x14]` would match CM4.
|===

These are particularly useful for defining different `kernel`, `initramfs`, and `cmdline` settings, as the Raspberry Pi 1 and Raspberry Pi 2 require different kernels. They can also be useful to define different overclocking settings, as the Raspberry Pi 1 and Raspberry Pi 2 have different default speeds. For example, to define separate `initramfs` images for each:

----
[pi1]
initramfs initrd.img-3.18.7+ followkernel
[pi2]
initramfs initrd.img-3.18.7-v7+ followkernel
[all]
----

Remember to use the `[all]` filter at the end, so that any subsequent settings aren't limited to Raspberry Pi 2 hardware only.

[NOTE]
====
Some models of Raspberry Pi, including Zero, Compute Module, and Keyboard models, read settings from multiple filters. To apply a setting to only one model:

* apply the setting to the base model (e.g.
`[pi4]`), then revert the setting for all models that read the base model's filters (e.g. `[pi400]`, `[cm4]`, `[cm4s]`)
* use the `board-type` filter with a revision code to target a single model (e.g. `[board-type=0x11]`)
====

=== The `[none]` filter

The `[none]` filter prevents any settings that follow from being applied to any hardware. Although there is nothing that you can't do without `[none]`, it can be a useful way to keep groups of unused settings in `config.txt` without having to comment out every line.

[source,ini]
----
# Bootloader EEPROM config.
# If PM_RSTS is partition 62 then set bootloader properties to disable
# SD high speed and show HDMI diagnostics
# Boot from partition 2 with debug option.
[partition=62]
# Only high (>31) partitions can be remapped.
PARTITION=2
SD_QUIRKS=0x1
HDMI_DELAY=0
----

Example `config.txt` (currently Raspberry Pi 5 onwards):

[source,ini]
----
# config.txt - If the original requested partition number in PM_RSTS was a
# special number then use an alternate cmdline.txt
[partition=62]
cmdline=cmdline-recovery.txt
----

The raw value of the `PM_RSTS` register at bootup is available via `/proc/device-tree/chosen/bootloader/rsts`, and the final partition number used for booting is available via `/proc/device-tree/chosen/bootloader/partition`. These are big-endian binary values.

=== The expression filter

The expression filter provides support for comparing unsigned integer "boot variables" to constants using a simple set of operators. It is intended to support OTA update mechanisms, debug and test.

* The "boot variables" are `boot_arg1`, `boot_count`, `boot_partition` and `partition`.
* Boot variables are always lower case.
* Integer constants may be written either as decimal or as hex.
* Expression conditional filters have no side-effects, e.g. no assignment operators.
* As with other filter types, the expression filter cannot be nested.
* Use the `[all]` filter to reset expressions and all other conditional filter types.
Syntax:

[source,ini]
----
# ARG is a boot-variable
# VALUE and MASK are unsigned integer constants
[ARG=VALUE]       # selected if (ARG == VALUE)
[ARG&MASK]        # selected if ((ARG & MASK) != 0)
[ARG&MASK=VALUE]  # selected if ((ARG & MASK) == VALUE)
[ARG<VALUE]       # selected if (ARG < VALUE)
[ARG>VALUE]       # selected if (ARG > VALUE)
----

==== `boot_arg1`

Raspberry Pi 5 and newer devices only.

The `boot_arg1` variable is a 32-bit user-defined value which is stored in a reset-safe register, allowing parameters to be passed across a reboot.

Setting `boot_arg1` to 42 via `config.txt`:

[source,ini]
----
set_reboot_arg1=42
----

The `set_reboot_arg1` property sets the value for the next boot. It does not change the current value as seen by the config parser.

Setting `boot_arg1` to 42 via vcmailbox:

[source,console]
----
sudo vcmailbox 0x0003808c 8 8 1 42
----

Reading `boot_arg1` via vcmailbox:

[source,console]
----
sudo vcmailbox 0x0003008c 8 8 1 0
# Example output - boot_arg1 is 42
# 0x00000020 0x80000000 0x0003008c 0x00000008 0x80000008 0x00000001 0x0000002a 0x00000000
----

The value of the `boot_arg1` variable when the OS was started can be read via xref:configuration.adoc#part4[device-tree] at `/proc/device-tree/chosen/bootloader/arg1`.

==== `bootvar0`

Raspberry Pi 4 and newer devices only.

The `bootvar0` variable is a 32-bit user-defined value that is set through `rpi-eeprom-config`, and can then be used as a conditional variable in `config.txt`. For example, setting `bootvar0` to 42 via `rpi-eeprom-config`:

[source,ini]
----
BOOTVAR0=42
----

Using `bootvar0` conditionally in `config.txt`:

[source,ini]
----
[bootvar0=42]
arm_freq=1000
----

This allows a common image (that is, with the same `config.txt` file) to support different configurations based on the persistent `rpi-eeprom-config` settings.

==== `boot_count`

Raspberry Pi 5 and newer devices only.

The `boot_count` variable is an 8-bit value stored in a reset-safe register that is incremented at boot (wrapping back to zero at 256).
It is cleared if power is disconnected.

To read `boot_count` via vcmailbox:

[source,console]
----
sudo vcmailbox 0x0003008d 4 4 0
# Example - boot count is 3
# 0x0000001c 0x80000000 0x0003008d 0x00000004 0x80000004 0x00000003 0x00000000
----

Setting/clearing `boot_count` via vcmailbox:

[source,console]
----
# Clear boot_count by setting it to zero.
sudo vcmailbox 0x0003808d 4 4 0
----

The value of `boot_count` when the OS was started can be read via xref:configuration.adoc#part4[device-tree] at `/proc/device-tree/chosen/bootloader/count`.

==== `boot_partition`

The `boot_partition` variable can be used to select alternate OS files (e.g. `cmdline.txt`) to be loaded, depending on which partition `config.txt` was loaded from after processing xref:config_txt.adoc#autoboot-txt[autoboot.txt]. This is intended for use with an `A/B` boot system with `autoboot.txt`, where it is desirable to have identical files installed to the boot partition for both the `A` and `B` images.

The value of `boot_partition` can differ from the requested `partition` variable if it was overridden by setting `boot_partition` in xref:config_txt.adoc#autoboot-txt[autoboot.txt], or if the specified partition was not bootable and xref:raspberry-pi.adoc#PARTITION_WALK[PARTITION_WALK] was enabled in the EEPROM config.

Example `config.txt` - select the matching root filesystem for the `A/B` boot file-system:

[source,ini]
----
# Use different cmdline files to point to different root filesystems
# based on which partition the system booted from.
[boot_partition=1]
cmdline=cmdline_rootfs_a.txt   # Points to root filesystem A
[boot_partition=2]
cmdline=cmdline_rootfs_b.txt   # Points to root filesystem B
----

The value of `boot_partition`, i.e.
the partition used to boot the OS, can be read from xref:configuration.adoc#part4[device-tree] at `/proc/device-tree/chosen/bootloader/partition`.

==== `partition`

The `partition` variable can be used to select alternate boot flows according to the requested partition number (`sudo reboot N`) or via direct usage of the `PM_RSTS` watchdog register.

=== The `[tryboot]` filter

This filter succeeds if the `tryboot` reboot flag was set. It is intended for use in xref:config_txt.adoc#autoboot-txt[autoboot.txt] to select a different `boot_partition` in `tryboot` mode for fail-safe OS updates.

The value of `tryboot` at the start of boot can be read via xref:configuration.adoc#part4[device-tree] at `/proc/device-tree/chosen/bootloader/tryboot`.

=== The `[EDID=*]` filter

When switching between multiple monitors while using a single SD card in your Raspberry Pi, and where a blank config isn't sufficient to automatically select the desired resolution for each one, this filter allows specific settings to be chosen based on the monitors' EDID names.

To view the EDID name of an attached monitor, you need to follow a few steps. Run the following command to see which output devices you have on your Raspberry Pi:

[source,console]
----
$ ls -1 /sys/class/drm/card?-HDMI-A-?/edid
----

On a Raspberry Pi 4, this will print something like:

----
/sys/class/drm/card1-HDMI-A-1/edid
/sys/class/drm/card1-HDMI-A-2/edid
----

You then need to run `edid-decode` against each of these filenames, for example:

[source,console]
----
$ edid-decode /sys/class/drm/card1-HDMI-A-1/edid
----

If there's no monitor connected to that particular output device, it'll tell you the EDID was empty; otherwise it will serve you *lots* of information about your monitor's capabilities. You need to look for the lines specifying the `Manufacturer` and the `Display Product Name`. The "EDID name" is then `<Manufacturer>-<Display Product Name>`, with any spaces in either string replaced by underscores. For example, if your `edid-decode` output included:

----
....
Vendor & Product Identification: Manufacturer: DEL .... Display Product Name: 'DELL U2422H' .... ---- The EDID name for this monitor would be `DEL-DELL_U2422H`. You can then use this as a conditional-filter to specify settings that only apply when this particular monitor is connected: [source,ini] ---- [EDID=DEL-DELL_U2422H] cmdline=cmdline_U2422H.txt [all] ---- These settings apply only at boot. The monitor must be connected at boot time, and the Raspberry Pi must be able to read its EDID information to find the correct name. Hotplugging a different monitor into the Raspberry Pi after boot will not select different settings. On the Raspberry Pi 4, if both HDMI ports are in use, then the EDID filter will be checked against both of them, and configuration from all matching conditional filters will be applied. NOTE: This setting is not available on Raspberry Pi 5. === The serial number filter Sometimes settings should only be applied to a single specific Raspberry Pi, even if you swap the SD card to a different one. Examples include licence keys and overclocking settings (although the licence keys already support SD card swapping in a different way). You can also use this to select different display settings, even if the EDID identification above is not possible, provided that you don't swap monitors between your Raspberry Pis. For example, if your monitor doesn't supply a usable EDID name, or if you are using composite output (from which EDID cannot be read). To view the serial number of your Raspberry Pi, run the following command: [source,console] ---- $ cat /proc/cpuinfo ---- A 16-digit hex value will be displayed near the bottom of the output. Your Raspberry Pi's serial number is the last eight hex-digits. For example, if you see: ---- Serial : 0000000012345678 ---- The serial number is `12345678`. NOTE: On some Raspberry Pi models, the first 8 hex-digits contain values other than `0`. Even in this case, only use the last eight hex-digits as the serial number. 
You can define settings that will only be applied to this specific Raspberry Pi: [source,ini] ---- [0x12345678] # settings here apply only to the Raspberry Pi with this serial [all] # settings here apply to all hardware ---- === The GPIO filter You can also filter depending on the state of a GPIO. For example: [source,ini] ---- [gpio4=1] # Settings here apply if GPIO 4 is high [gpio2=0] # Settings here apply if GPIO 2 is low [all] # settings here apply to all hardware ---- === Combine conditional filters Filters of the same type replace each other, so `[pi2]` overrides `[pi1]`, because it is not possible for both to be true at once. Filters of different types can be combined by listing them one after the other, for example: [source,ini] ---- # settings here apply to all hardware [EDID=VSC-TD2220] # settings here apply only if monitor VSC-TD2220 is connected [pi2] # settings here apply only if monitor VSC-TD2220 is connected *and* on a Raspberry Pi 2 [all] # settings here apply to all hardware ---- Use the `[all]` filter to reset all previous filters and avoid unintentionally combining different filter types. --- # Source: gpio.adoc *Note: This file could not be automatically converted from AsciiDoc.* == GPIO control === `gpio` The `gpio` directive allows GPIO pins to be set to specific modes and values at boot time in a way that would previously have needed a custom `dt-blob.bin` file. Each line applies the same settings (or at least makes the same changes) to a set of pins, addressing either a single pin (`3`), a range of pins (`3-4`), or a comma-separated list of either (`3-4,6,8`). The pin set is followed by an `=` and one or more comma-separated attributes from this list: * `ip` - Input * `op` - Output * `a0-a5` - Alt0-Alt5 * `dh` - Driving high (for outputs) * `dl` - Driving low (for outputs) * `pu` - Pull up * `pd` - Pull down * `pn/np` - No pull `gpio` settings apply in order, so those appearing later override those appearing earlier. 
Examples:

[source,ini]
----
# Select Alt2 for GPIO pins 0 to 27 (for DPI24)
gpio=0-27=a2

# Set GPIO12 to be an output set to 1
gpio=12=op,dh

# Change the pull on (input) pins 18 and 20
gpio=18,20=pu

# Make pins 17 to 21 inputs
gpio=17-21=ip
----

The `gpio` directive respects the "[...]" conditional filters in `config.txt`, so it is possible to use different settings based on the model, serial number, and EDID.

GPIO changes made through this mechanism do not have any direct effect on the kernel. They don't cause GPIO pins to be exported to the `sysfs` interface, and they can be overridden by `pinctrl` entries in the Device Tree as well as by utilities like `pinctrl`.

Note also that there is a delay of a few seconds between power being applied and the changes taking effect - longer if booting over the network or from a USB mass storage device.

=== `enable_jtag_gpio`

Setting `enable_jtag_gpio=1` selects Alt4 mode for GPIO pins 22-27, and sets up some internal SoC connections, enabling the JTAG interface for the Arm CPU. It works on all models of Raspberry Pi.

|===
| Pin # | Function

| GPIO22 | `ARM_TRST`
| GPIO23 | `ARM_RTCK`
| GPIO24 | `ARM_TDO`
| GPIO25 | `ARM_TCK`
| GPIO26 | `ARM_TDI`
| GPIO27 | `ARM_TMS`
|===

---

# Source: memory.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Memory options

=== `total_mem`

This parameter can be used to force a Raspberry Pi to limit its memory capacity: specify the total amount of RAM, in megabytes, you wish the Raspberry Pi to use. For example, to make a 4 GB Raspberry Pi 4B behave as though it were a 1 GB model, use the following:

[source,ini]
----
total_mem=1024
----

This value will be clamped between a minimum of 128 MB, and a maximum of the total memory installed on the board.
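Because `config.txt` settings respect conditional filters, a memory limit like this can also be applied to just one board. A sketch, using a hypothetical serial number:

[source,ini]
----
# Limit only the board with this (hypothetical) serial number to 2 GB of RAM,
# e.g. to reproduce a low-memory configuration for testing
[0x12345678]
total_mem=2048
[all]
----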
---

# Source: overclocking.adoc

*Note: This file could not be automatically converted from AsciiDoc.*

== Overclocking options

The kernel has a https://www.kernel.org/doc/html/latest/admin-guide/pm/cpufreq.html[CPUFreq] driver with the `powersave` governor enabled by default, switched to `ondemand` during boot when xref:configuration.adoc#raspi-config[raspi-config] is installed. With the `ondemand` governor, CPU frequency will vary with processor load. You can adjust the minimum values with the `*_min` config options, or disable dynamic clocking by applying a static scaling governor (`powersave` or `performance`) or with `force_turbo=1`.

Overclocking and overvoltage will be disabled at runtime when the SoC reaches `temp_limit` (see below), which defaults to 85°C, in order to cool down the SoC. You should not hit this limit with Raspberry Pi 1 and Raspberry Pi 2, but you are more likely to with Raspberry Pi 3 and newer. Overclocking and overvoltage are also disabled when an undervoltage situation is detected.

NOTE: For more information, xref:raspberry-pi.adoc#frequency-management-and-thermal-control[see the section on frequency management and thermal control].

WARNING: Setting any overclocking parameters to values other than those used by xref:configuration.adoc#overclock[`raspi-config`] may set a permanent bit within the SoC. This makes it possible to detect that your Raspberry Pi was once overclocked. The overclock bit is set when `force_turbo` is set to `1` and any of the `over_voltage_*` options are set to a value of more than `0`. See the https://www.raspberrypi.com/news/introducing-turbo-mode-up-to-50-more-performance-for-free/[blog post on Turbo mode] for more information.

=== Overclocking

[cols="1m,3"]
|===
| Option | Description

| arm_freq | Frequency of the Arm CPU in MHz.
| arm_boost | Increases `arm_freq` to the highest supported frequency for the board-type and firmware. Set to `1` to enable.
| gpu_freq | Sets `core_freq`, `h264_freq`, `isp_freq`, `v3d_freq` and `hevc_freq` together.
| core_freq | Frequency of the GPU processor core in MHz. Influences CPU performance because it drives the L2 cache and memory bus; the L2 cache benefits only Raspberry Pi Zero/Raspberry Pi Zero W/Raspberry Pi 1; and there is a small benefit for SDRAM on Raspberry Pi 2 and Raspberry Pi 3. See the section below for use on Raspberry Pi 4.
| h264_freq | Frequency of the hardware video block in MHz; individual override of the `gpu_freq` setting.
| isp_freq | Frequency of the image sensor pipeline block in MHz; individual override of the `gpu_freq` setting.
| v3d_freq | Frequency of the 3D block in MHz; individual override of the `gpu_freq` setting. On Raspberry Pi 5, V3D is independent of `core_freq`, `isp_freq` and `hevc_freq`.
| hevc_freq | Frequency of the High Efficiency Video Codec block in MHz; individual override of the `gpu_freq` setting. Raspberry Pi 4 only.
| sdram_freq | Frequency of the SDRAM in MHz. SDRAM overclocking on Raspberry Pi 4 or newer is not supported.
| over_voltage | CPU/GPU core upper voltage limit. The value should be in the range [-16,8], which equates to the range [0.95V,1.55V] ([0.8V,1.4V] on Raspberry Pi 1) with 0.025V steps. In other words, specifying -16 will give 0.95V (0.8V on Raspberry Pi 1) as the maximum CPU/GPU core voltage, and specifying 8 will allow up to 1.55V (1.4V on Raspberry Pi 1). For defaults, see the table below. Values above 6 are only allowed when `force_turbo=1` is specified: this sets the warranty bit if `over_voltage_*` > `0` is also set.
| over_voltage_sdram | Sets `over_voltage_sdram_c`, `over_voltage_sdram_i`, and `over_voltage_sdram_p` together.
| over_voltage_sdram_c | SDRAM controller voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or later devices.
| over_voltage_sdram_i | SDRAM I/O voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps.
Not supported on Raspberry Pi 4 or later devices.
| over_voltage_sdram_p | SDRAM phy voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or later devices.
| force_turbo | Forces turbo mode frequencies even when the Arm cores are not busy. Enabling this may set the warranty bit if `over_voltage_*` is also set.
| initial_turbo | Enables https://forums.raspberrypi.com/viewtopic.php?f=29&t=6201&start=425#p180099[turbo mode from boot] for the given value in seconds, or until `cpufreq` sets a frequency. The maximum value is `60`. The November 2024 firmware update changed the default from `0` to `60` to reduce boot time, and switched the kernel CPU performance governor from `powersave` to `ondemand`.
| arm_freq_min | Minimum value of `arm_freq` used for dynamic frequency clocking. Note that reducing this value below the default does not result in any significant power savings, and is not currently supported.
| core_freq_min | Minimum value of `core_freq` used for dynamic frequency clocking.
| gpu_freq_min | Minimum value of `gpu_freq` used for dynamic frequency clocking.
| h264_freq_min | Minimum value of `h264_freq` used for dynamic frequency clocking.
| isp_freq_min | Minimum value of `isp_freq` used for dynamic frequency clocking.
| v3d_freq_min | Minimum value of `v3d_freq` used for dynamic frequency clocking.
| hevc_freq_min | Minimum value of `hevc_freq` used for dynamic frequency clocking.
| sdram_freq_min | Minimum value of `sdram_freq` used for dynamic frequency clocking.
| over_voltage_min | Minimum value of `over_voltage` used for dynamic frequency clocking. The value should be in the range [-16,8], which equates to the range [0.8V,1.4V] with 0.025V steps. In other words, specifying -16 will give 0.8V as the CPU/GPU core idle voltage, and specifying 8 will give a minimum of 1.4V. This setting is deprecated on Raspberry Pi 4 and Raspberry Pi 5.
| over_voltage_delta | On Raspberry Pi 4 and Raspberry Pi 5, the `over_voltage_delta` parameter adds the given offset in microvolts to the number calculated by the DVFS algorithm.
| temp_limit | Overheat protection. This sets the clocks and voltages to default when the SoC reaches this value in degrees Celsius. Values over 85 are clamped to 85.
| temp_soft_limit | *3A+/3B+ only*. CPU speed throttle control. This sets the temperature at which the CPU clock speed throttling system activates. At this temperature, the clock speed is reduced from 1400 MHz to 1200 MHz. Defaults to `60`; can be raised to a maximum of `70`, but this may cause instability.
| core_freq_fixed | Setting to `1` disables active scaling of the core clock frequency and ensures that any peripherals that use the core clock will maintain a consistent speed. The fixed clock speed is the higher/turbo frequency for the platform in use. Use this in preference to setting specific core-clock frequencies, as it provides portability of config files between platforms.
|===

This table gives the default values for the options on various Raspberry Pi models; all frequencies are stated in MHz.
[cols="m,^,^,^,^,^,^,^,^,^,^"]
|===
| Option | Pi Zero W | Pi 1 | Pi 2 | Pi 3 | Pi 3A+/Pi 3B+ | CM4 & Pi 4B <= R1.3 | Pi 4B R1.4 | Pi 400 | Pi Zero 2 W | Pi 5/500/500+

| arm_freq | 1000 | 700 | 900 | 1200 | 1400 | 1500 | 1500 or 1800 if `arm_boost`=1 | 1800 | 1000 | 2400
| core_freq | 400 | 250 | 250 | 400 | 400 | 500 | 500 | 500 | 400 | 910
| h264_freq | 300 | 250 | 250 | 400 | 400 | 500 | 500 | 500 | 300 | N/A
| isp_freq | 300 | 250 | 250 | 400 | 400 | 500 | 500 | 500 | 300 | 910
| v3d_freq | 300 | 250 | 250 | 400 | 400 | 500 | 500 | 500 | 300 | 960
| hevc_freq | N/A | N/A | N/A | N/A | N/A | 500 | 500 | 500 | N/A | 910
| sdram_freq | 450 | 400 | 450 | 450 | 500 | 3200 | 3200 | 3200 | 450 | 4267
| arm_freq_min | 700 | 700 | 600 | 600 | 600 | 600 | 600 | 600 | 600 | 1500
| core_freq_min | 250 | 250 | 250 | 250 | 250 | 200 | 200 | 200 | 250 | 500
| gpu_freq_min | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 500
| h264_freq_min | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | N/A
| isp_freq_min | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 500
| v3d_freq_min | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 500
| sdram_freq_min | 400 | 400 | 400 | 400 | 400 | 3200 | 3200 | 3200 | 400 | 4267
|===

This table gives defaults for options which are the same across all models.

[cols="m,^"]
|===
| Option | Default

| initial_turbo | 0 (seconds)
| temp_limit | 85 (°C)
| over_voltage | 0 (1.35V, 1.2V on Raspberry Pi 1)
| over_voltage_min | 0 (1.2V)
| over_voltage_sdram | 0 (1.2V)
| over_voltage_sdram_c | 0 (1.2V)
| over_voltage_sdram_i | 0 (1.2V)
| over_voltage_sdram_p | 0 (1.2V)
|===

The firmware uses Adaptive Voltage Scaling (AVS) to determine the optimum CPU/GPU core voltage in the range defined by `over_voltage` and `over_voltage_min`.
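As an illustration of how these options are combined in practice, the fragment below sketches a modest Raspberry Pi 4 overclock relative to the defaults above. The values are examples only; stable limits vary from board to board, and individual block frequencies are raised rather than `gpu_freq`:

[source,ini]
----
# Modest Raspberry Pi 4 overclock (illustrative values only; test for stability)
[pi4]
arm_freq=1800
over_voltage=2
v3d_freq=600
[all]
----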
[discrete]
==== Specific to Raspberry Pi 4, Raspberry Pi 400 and CM4

The minimum core frequency when the system is idle must be fast enough to support the highest pixel clock (ignoring blanking) of the display(s). Consequently, `core_freq` will be boosted above 500 MHz if the display mode is 4Kp60.

|===
| Display option | Max `core_freq`

| Default | 500
| `hdmi_enable_4kp60` | 550
|===

NOTE: There is no need to use `hdmi_enable_4kp60` on Flagship models since Raspberry Pi 5, Compute Modules since CM5, and Keyboard models since Pi 500; they support dual-4Kp60 displays by default.

* Overclocking requires the latest firmware release.
* The latest firmware automatically scales up the voltage if the system is overclocked. Manually setting `over_voltage` disables automatic voltage scaling for overclocking.
* It is recommended when overclocking to use the individual frequency settings (`isp_freq`, `v3d_freq` etc.) rather than `gpu_freq`, because the maximum stable frequency will be different for ISP, V3D, HEVC etc.
* The SDRAM frequency is not configurable on Raspberry Pi 4 or later devices.

==== `force_turbo`

By default (`force_turbo=0`) the on-demand CPU frequency driver will raise clocks to their maximum frequencies when the Arm cores are busy, and will lower them to the minimum frequencies when the Arm cores are idle. `force_turbo=1` overrides this behaviour and forces maximum frequencies even when the Arm cores are not busy.

=== Clocks relationship

==== Raspberry Pi 4

The GPU core, CPU and SDRAM each have their own PLLs and can have unrelated frequencies. The h264, v3d and ISP blocks share a PLL.

To view the Raspberry Pi's current frequency in kHz, type: `cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq`. Divide the result by 1000 to find the value in MHz. Note that this frequency is the kernel _requested_ frequency, and it is possible that any throttling (for example at high temperatures) may mean the CPU is actually running more slowly than reported.
An instantaneous measurement of the actual Arm CPU frequency can be retrieved with `vcgencmd measure_clock arm`. This value is displayed in Hertz.

=== Monitoring core temperature

[.whitepaper, title="Cooling a Raspberry Pi device", subtitle="", link=https://pip.raspberrypi.com/documents/RP-003608-WP-Cooling-a-Raspberry-Pi-device.pdf]
****
This white paper goes through the reasons why your Raspberry Pi may get hot and why you might want to cool it back down, offering options on the cooling process.
****

To view the temperature of a Raspberry Pi, run the following command:

[source,console]
----
$ cat /sys/class/thermal/thermal_zone0/temp
----

Divide the result by 1000 to find the value in degrees Celsius. Alternatively, you can use `vcgencmd measure_temp` to report the GPU temperature.

Hitting the temperature limit is not harmful to the SoC, but it will cause the CPU to throttle. A heat sink can help to control the core temperature, and therefore performance. This is especially useful if the Raspberry Pi is running inside a case. Airflow over the heat sink will make cooling more efficient.

When the core temperature is between 80°C and 85°C, the Arm cores will be throttled back. If the temperature exceeds 85°C, the Arm cores and the GPU will be throttled back.

For the Raspberry Pi 3 Model B+, the PCB technology has been changed to provide better heat dissipation and increased thermal mass. In addition, a soft temperature limit has been introduced, with the goal of maximising the time for which a device can "sprint" before reaching the hard limit at 85°C. When the soft limit is reached, the clock speed is reduced from 1.4 GHz to 1.2 GHz, and the operating voltage is reduced slightly. This reduces the rate of temperature increase: we trade a short period at 1.4 GHz for a longer period at 1.2 GHz. By default, the soft limit is 60°C. This can be changed via the `temp_soft_limit` setting in `config.txt`.
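When scripting this temperature check, the sysfs value can be converted as described above. A small Python sketch; the sysfs path is the one shown in the command above, and the helper itself just performs the divide-by-1000 conversion:

[source,python]
----
from pathlib import Path

def millicelsius_to_celsius(raw: str) -> float:
    """Convert the raw sysfs reading (millidegrees Celsius) to degrees."""
    return int(raw.strip()) / 1000

def read_soc_temperature() -> float:
    # Reads the same file as `cat /sys/class/thermal/thermal_zone0/temp`
    raw = Path("/sys/class/thermal/thermal_zone0/temp").read_text()
    return millicelsius_to_celsius(raw)
----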
=== Monitoring voltage

It is essential to keep the supply voltage above 4.8V for reliable performance. Note that the voltage from some USB chargers/power supplies can fall as low as 4.2V. This is because they are usually designed to charge a 3.7V LiPo battery, not to supply 5V to a computer. To monitor the Raspberry Pi's PSU voltage, use a multimeter to measure between the VCC and GND pins on the GPIO. More information is available in the xref:raspberry-pi.adoc#power-supply[power] section of the documentation.

If the voltage drops below 4.63V (±5%), the Arm cores and the GPU will be throttled back, and a message indicating the low voltage state will be added to the kernel log.

The Raspberry Pi 5 PMIC has built-in ADCs that allow the supply voltage to be measured. To view the current supply voltage, run the following command:

[source,console]
----
$ vcgencmd pmic_read_adc EXT5V_V
----

=== Overclocking problems

Most overclocking issues show up immediately, when the device fails to boot. If your device fails to boot due to an overclocking configuration change, use the following steps to return your device to a bootable state:

. Remove any clock frequency overrides from `config.txt`.
. Increase the core voltage using `over_voltage_delta`.
. Re-apply overclocking parameters, taking care to avoid the previous known-bad overclocking parameters.

---

# Source: video.adoc

== Video options

=== HDMI mode

To control HDMI settings, use the xref:configuration.adoc#set-resolution-and-rotation[Screen Configuration utility] or xref:configuration.adoc#set-the-kms-display-mode[KMS video settings] in `cmdline.txt`.

==== HDMI Pipeline for 4-series devices

In order to support dual displays and modes up to 4Kp60, Raspberry Pi 4, Compute Module 4, and Pi 400 generate 2 output pixels for every clock cycle. Every HDMI mode has a list of timings that control all the parameters around sync pulse durations.
These are typically defined via a pixel clock, plus a number of active pixels, a front porch, sync pulse, and back porch for each of the horizontal and vertical directions. Running everything at 2 pixels per clock means that the 4-series devices cannot support a timing where _any_ of the horizontal timings are not divisible by 2. The firmware and Linux kernel filter out any mode that does not fulfil this criterion.

There is only one incompatible mode in the CEA and DMT standards: DMT mode 81, 1366x768 @ 60Hz. This mode has odd-numbered values for the horizontal sync and back porch timings, and a width that is not divisible by 8. If your monitor has this resolution, 4-series devices automatically drop down to the next mode advertised by the monitor; typically 1280x720.

==== HDMI Pipeline for 5-series devices

Flagship models since Raspberry Pi 5, Compute Module models since CM5, and Keyboard models since Pi 500 also work at 2 output pixels per clock cycle. These models have special handling for odd timings and can handle these modes directly.

=== Composite video mode

Composite video output can be found on each model of Raspberry Pi computer:

|===
| Model | Composite output

| Raspberry Pi 1 A and B | RCA jack
| Raspberry Pi Zero | Unpopulated `TV` header
| Raspberry Pi Zero 2 W | Test pads on underside of board
| Raspberry Pi 5 | J7 pad next to HDMI socket
| All other models | 3.5 mm AV jack
|===

NOTE: Composite video output is not available on Keyboard models.

==== `enable_tvout`

Set to `1` to enable composite video output and `0` to disable. On Flagship models since Raspberry Pi 4, Compute Modules since CM4, and Zero models, composite output is only available if you set this to `1`, which also disables HDMI output. Composite output is not available on Keyboard models.
[%header,cols="1,1"] |=== |Model |Default |Flagship models since Raspberry Pi 4B, Compute Modules since CM4, Keyboard models |0 |All other models |1 |=== On supported models, you must disable HDMI output to enable composite output. HDMI output is disabled when no HDMI display is detected. Set `enable_tvout=0` to prevent composite being enabled when HDMI is disabled. To enable composite output, append `,composite` to the end of the `dtoverlay=vc4-kms-v3d` line in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]: [source,ini] ---- dtoverlay=vc4-kms-v3d,composite ---- By default, this outputs composite NTSC video. To choose a different mode, instead append the following to the single line in `/boot/firmware/cmdline.txt`: [source,ini] ---- vc4.tv_norm= ---- Replace the `` placeholder with one of the following values: * `NTSC` * `NTSC-J` * `NTSC-443` * `PAL` * `PAL-M` * `PAL-N` * `PAL60` * `SECAM` === LCD displays and touchscreens ==== `ignore_lcd` By default, the Raspberry Pi Touch Display is used when detected on the I2C bus. `ignore_lcd=1` skips this detection phase. This prevents the LCD display from being used. ==== `disable_touchscreen` Enables and disables the touchscreen. `disable_touchscreen=1` disables the touchscreen component of the official Raspberry Pi Touch Display. === Generic display options ==== `disable_fw_kms_setup` By default, the firmware parses the EDID of any HDMI attached display, picks an appropriate video mode, then passes the resolution and frame rate of the mode (and overscan parameters) to the Linux kernel via settings on the kernel command line. In rare circumstances, the firmware can choose a mode not in the EDID that may be incompatible with the device. Use `disable_fw_kms_setup=1` to disable passing video mode parameters, which can avoid this problem. The Linux video mode system (KMS) instead parses the EDID itself and picks an appropriate mode. NOTE: On Raspberry Pi 5, this parameter defaults to `1`. 
---

# Source: what_is_config_txt.adoc

== What is `config.txt`?

Instead of the https://en.wikipedia.org/wiki/BIOS[BIOS] found on a conventional PC, Raspberry Pi devices use a configuration file called `config.txt`. The GPU reads `config.txt` before the Arm CPU and Linux initialise. Raspberry Pi OS looks for this file in the *boot partition*, located at `/boot/firmware/`.

NOTE: Prior to Raspberry Pi OS _Bookworm_, Raspberry Pi OS stored the boot partition at `/boot/`.

You can edit `config.txt` directly from your Raspberry Pi OS installation. You can also remove the storage device and edit files in the boot partition, including `config.txt`, from a separate computer. Changes to `config.txt` only take effect after a reboot.

You can view the currently active settings using the following commands:

`vcgencmd get_config `:: displays a specific config value, e.g. `vcgencmd get_config arm_freq`
`vcgencmd get_config int`:: lists all non-zero integer config options
`vcgencmd get_config str`:: lists all non-null string config options

NOTE: Not all config settings can be retrieved using `vcgencmd`.

Some legacy `config.txt` options are no longer officially supported. These are listed in xref:../computers/legacy_config_txt.adoc[Legacy config.txt options] and aren't included in this article.

=== File format

The `config.txt` file is read by the early-stage boot firmware, so it uses a very simple file format: **a single `property=value` statement on each line, where `value` is either an integer or a string**. Comments may be added, or existing config values may be commented out and disabled, by starting a line with the `#` character.

There is a 98-character line length limit for entries. Raspberry Pi OS ignores any characters past this limit.
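The format rules above (one `property=value` per line, `#` comments, 98-character limit) are simple enough to sketch as a parser. This Python sketch is illustrative only, not the firmware's actual parser:

[source,python]
----
MAX_LINE = 98  # characters beyond this limit are ignored

def parse_config(text: str) -> dict[str, str]:
    """Parse config.txt-style text: one property=value per line,
    '#' starts a comment, entries are truncated at 98 characters."""
    settings: dict[str, str] = {}
    for line in text.splitlines():
        line = line[:MAX_LINE].strip()
        if not line or line.startswith("#"):
            continue  # blank line or comment
        if "=" in line:
            prop, _, value = line.partition("=")
            settings[prop.strip()] = value.strip()
    return settings
----

Note that `partition` splits on the first `=` only, so a value such as `audio=on` in `dtparam=audio=on` survives intact.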
Here is an example file:

[source,ini]
----
# Enable audio (loads snd_bcm2835)
dtparam=audio=on

# Automatically load overlays for detected cameras
camera_auto_detect=1

# Automatically load overlays for detected DSI displays
display_auto_detect=1

# Enable DRM VC4 V3D driver
dtoverlay=vc4-kms-v3d
----

=== Advanced features

==== `include`

Causes the content of the specified file to be inserted into the current file. For example, adding the line `include extraconfig.txt` to `config.txt` will include the content of the `extraconfig.txt` file in the `config.txt` file.

[NOTE]
====
The `bootcode.bin` or EEPROM bootloaders do not support the `include` directive. Settings which are handled by the bootloader will only take effect if they are specified in `config.txt` (rather than any additional included file):

* `bootcode_delay`
* `gpu_mem`, `gpu_mem_256`, `gpu_mem_512`, `gpu_mem_1024`
* `total_mem`
* `sdram_freq`
* `start_x`, `start_debug`, `start_file`, `fixup_file`
* `uart_2ndstage`
====

==== Conditional filtering

Conditional filters are covered in the xref:config_txt.adoc#conditional-filters[conditionals section].

---

# Source: config_txt.adoc

include::config_txt/what_is_config_txt.adoc[]
include::config_txt/autoboot.adoc[]
include::config_txt/common.adoc[]
include::config_txt/audio.adoc[]
include::config_txt/boot.adoc[]
include::config_txt/gpio.adoc[]
include::config_txt/overclocking.adoc[]
include::config_txt/conditional.adoc[]
include::config_txt/memory.adoc[]
include::config_txt/codeclicence.adoc[]
include::config_txt/video.adoc[]
include::config_txt/camera.adoc[]

---

# Source: audio-config.adoc

== Audio

Raspberry Pi OS has multiple audio output modes: HDMI 1, the headphone jack (if your device has one), and USB audio. By default, Raspberry Pi OS outputs audio to HDMI 1.
If no HDMI output is available, Raspberry Pi OS outputs audio to the headphone jack or a connected USB audio device.

=== Change audio output

Use the following methods to configure audio output in Raspberry Pi OS:

[[pro-audio-profile]]
[tabs]
======
Desktop volume control::
+
Right-click the volume icon on the system tray to open the **audio output selector**. This interface lets you choose an audio output device. Click an audio output device to switch audio output to that device.
+
You may see a device profile named **Pro Audio** when viewing an audio device in the audio output selector. This profile exposes the maximum number of channels across every audio device, allowing you greater control over the routing of signals. Unless you require fine-tuned control over audio output, use a different device profile.
+
For more information about the Pro Audio profile, visit https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ#what-is-the-pro-audio-profile[PipeWire's FAQ].

`raspi-config`::
+
To change your audio output using xref:configuration.adoc#raspi-config[`raspi-config`], run the following command:
+
[source,console]
----
$ sudo raspi-config
----
+
You should see a configuration screen. Complete the following steps to change your audio output:
+
. Select `System options` and press `Enter`.
. Select the `Audio` option and press `Enter`.
. Select your required mode and press `Enter` to select that mode.
. Press the right arrow key to exit the options list. Select `Finish` to exit the configuration tool.
======

---

# Source: boot_folder.adoc

== `boot` folder contents

Raspberry Pi OS stores boot files on the first partition of the SD card, formatted with the FAT file system. On startup, each Raspberry Pi loads various files from the boot partition in order to start up the various processors before the Linux kernel boots. On boot, Linux mounts the boot partition as `/boot/firmware/`.
NOTE: Prior to _Bookworm_, Raspberry Pi OS stored the boot partition at `/boot/`. Since _Bookworm_, the boot partition is located at `/boot/firmware/`.

=== `bootcode.bin`

The bootloader, loaded by the SoC on boot. It performs some very basic setup, and then loads one of the `start*.elf` files. The Raspberry Pi 4 and 5 do not use `bootcode.bin`; it has been replaced by boot code in the xref:raspberry-pi.adoc#raspberry-pi-boot-eeprom[onboard EEPROM].

=== `start*.elf`

Binary firmware blobs loaded onto the VideoCore GPU in the SoC, which then take over the boot process.

`start.elf`:: the basic firmware.
`start_x.elf`:: includes additional codecs.
`start_db.elf`:: used for debugging.
`start_cd.elf`:: a cut-down version of the firmware that removes support for hardware blocks such as codecs and 3D, as well as debug logging support; it also imposes initial frame buffer limitations. The cut-down firmware is automatically used when `gpu_mem=16` is specified in `config.txt`.

`start4.elf`, `start4x.elf`, `start4db.elf` and `start4cd.elf` are equivalent firmware files specific to the Raspberry Pi 4-series (Model 4B, Pi 400, Compute Module 4 and Compute Module 4S). For more information on how to use these files, see the xref:config_txt.adoc#boot-options[`config.txt` documentation].

The Raspberry Pi 5 does not use `.elf` files; the firmware is self-contained within the bootloader EEPROM.

=== `fixup*.dat`

Linker files found in matched pairs with the `start*.elf` files listed in the previous section.

=== `cmdline.txt`

The kernel command line passed into the kernel at boot.

=== `config.txt`

Contains many configuration parameters for setting up the Raspberry Pi. For more information, see the xref:config_txt.adoc[`config.txt` documentation].

IMPORTANT: Raspberry Pi 5 requires a non-empty `config.txt` file in the boot partition.

=== `issue.txt`

Text-based housekeeping information containing the date and git commit ID of the distribution.

=== `initramfs*`

Contents of the initial ramdisk.
This loads a temporary root file system into memory before the real root file system can be mounted. Since _Bookworm_, Raspberry Pi OS includes an `initramfs` file by default. To enable the initial ramdisk, configure it in xref:config_txt.adoc[`config.txt`] with the xref:config_txt.adoc#auto_initramfs[`auto_initramfs`] keyword.

=== `ssh` or `ssh.txt`

When this file is present, SSH is enabled at boot; SSH is otherwise disabled by default. The contents do not matter: even an empty file enables SSH.

=== Device Tree blob files (`*.dtb`)

Device Tree blob files contain the hardware definitions of the various models of Raspberry Pi. These files set up the kernel at boot xref:configuration.adoc#part3.1[based on the detected Raspberry Pi model].

=== Kernel files (`*.img`)

Various xref:linux_kernel.adoc#kernel[kernel] image files that correspond to Raspberry Pi models:

|===
| Filename | Processor | Raspberry Pi model | Notes

| `kernel.img` | BCM2835 | Pi Zero, Pi 1, CM1 |
| `kernel7.img` | BCM2836, BCM2837 | Pi Zero 2 W, Pi 2, Pi 3, CM3, Pi 3+, CM3+ | Later revisions of Pi 2 use BCM2837
| `kernel7l.img` | BCM2711 | Pi 4, CM4, CM4S, Pi 400 | Large Physical Address Extension (LPAE)
| `kernel8.img` | BCM2837, BCM2711, BCM2712 | Pi Zero 2 W, Pi 2 (later revisions), Pi 3, CM3, Pi 3+, CM3+, Pi 4, CM4, CM4S, Pi 400, CM5, Pi 5, Pi 500, Pi 500+ | xref:config_txt.adoc#boot-options[64-bit kernel]. Earlier revisions of Raspberry Pi 2 (with BCM2836) do not support 64-bit kernels.
| `kernel_2712.img` | BCM2712 | Pi 5, CM5, Pi 500, Pi 500+ | Pi 5-optimised xref:config_txt.adoc#boot-options[64-bit kernel].
|===

NOTE: `lscpu` reports a CPU architecture of `armv7l` for systems running a 32-bit kernel, and `aarch64` for systems running a 64-bit kernel. The `l` in `armv7l` refers to the little-endian CPU architecture, not LPAE as indicated by the `l` in the `kernel7l.img` filename.

=== `overlays` folder

Contains Device Tree overlays.
These are used to configure various hardware devices, such as third-party sound boards. Entries in `config.txt` select these overlays. For more information, see xref:configuration.adoc#part2[Device Trees, overlays and parameters].

---

# Source: configuring-networking.adoc

== Networking

Raspberry Pi OS provides a graphical user interface (GUI) for setting up wireless connections. Users of Raspberry Pi OS Lite and headless machines can set up wireless networking from the command line with https://networkmanager.dev/docs/api/latest/nmcli.html[`nmcli`].

NOTE: Starting with Raspberry Pi OS _Bookworm_, NetworkManager is the default networking configuration tool. Earlier versions of Raspberry Pi OS used `dhcpcd` and other tools for network configuration.

=== Connect to a wireless network

==== via the desktop

Access NetworkManager via the network icon at the right-hand end of the menu bar. If you are using a Raspberry Pi with built-in wireless connectivity, or if a wireless dongle is plugged in, click this icon to bring up a list of available wireless networks. If you see the message 'No APs found - scanning...', wait a few seconds, and NetworkManager should find your network.

NOTE: Devices with dual-band wireless automatically disable networking until you assign a wireless LAN country. Flagship models since Raspberry Pi 3B+, Compute Modules since CM4, and Keyboard models support dual-band wireless. To set a wireless LAN country, open the Control Centre application from the **Preferences** menu, select *Localisation* and select your country from the menu.

image::images/wifi2.jpg[wifi2]

The icons on the right show whether a network is secured or not, and give an indication of signal strength. Click the network that you want to connect to.
If the network is secured, a dialogue box will prompt you to enter the network key:

image::images/key.jpg[key]

Enter the key and click *OK*, then wait a couple of seconds. The network icon will flash briefly to show that a connection is being made. When connected, the icon will stop flashing and show the signal strength.

===== Connect to a hidden network

To use a hidden network, navigate to *Advanced options* > *Connect to a hidden Wi-Fi network* in the network menu:

image::images/network-hidden.jpg[the connect to a hidden wi-fi network option in advanced options]

Then, enter the SSID for the hidden network. Ask your network administrator which type of security your network uses; while most home networks currently use WPA and WPA2 personal security, public networks sometimes use WPA and WPA2 enterprise security. Select the security type for your network, and enter your credentials:

image::images/network-hidden-authentication.jpg[hidden wi-fi network authentication]

Click the *Connect* button to initiate the network connection.

[[wireless-networking-command-line]]
==== via the command line

This guide will help you configure a wireless connection on your Raspberry Pi from a terminal without using graphical tools. No additional software is required.

NOTE: This guide should work for WEP, WPA, WPA2, or WPA3 networks, but may not work for enterprise networks.

===== Enable wireless networking

On a fresh install, you must specify the country where you use your device. This allows your device to choose the correct frequency bands for 5 GHz networking. Once you have specified a wireless LAN country, you can use your Raspberry Pi's built-in wireless networking module.

To do this, set your wireless LAN country with the command line `raspi-config` tool. Run the following command:

[source,console]
----
$ sudo raspi-config
----

Select the *Localisation options* menu item using the arrow keys. Choose the *WLAN country* option.
Pick your country from the dropdown using the arrow keys. Press `Enter` to select your country.

You should now have access to wireless networking. Run the following command to check if your Wi-Fi radio is enabled:

[source,console]
----
$ nmcli radio wifi
----

If this command returns the text "enabled", you're ready to configure a connection. If this command returns "disabled", try enabling Wi-Fi with the following command:

[source,console]
----
$ nmcli radio wifi on
----

===== Find networks

To scan for wireless networks, run the following command:

[source,console]
----
$ nmcli dev wifi list
----

You should see output similar to the following:

----
IN-USE  BSSID              SSID         MODE   CHAN  RATE        SIGNAL  BARS  SECURITY
        90:72:40:1B:42:05  myNetwork    Infra  132   405 Mbit/s  89      ****  WPA2
        90:72:42:1B:78:04  myNetwork5G  Infra  11    195 Mbit/s  79      ***   WPA2
        9C:AB:F8:88:EB:0D  Pi Towers    Infra  1     260 Mbit/s  75      ***   WPA2 802.1X
        B4:2A:0E:64:BD:BE  Example      Infra  6     195 Mbit/s  37      **    WPA1 WPA2
----

Look in the "SSID" column for the name of the network you would like to connect to. Use the SSID and a password to connect to the network.

===== Connect to a network

Run the following command to configure a network connection, replacing the `` placeholder with the name of the network you're trying to configure:

[source,console]
----
$ sudo nmcli --ask dev wifi connect
----

Enter your network password when prompted. Your Raspberry Pi should automatically connect to the network once you enter your password. If you see error output that claims that "Secrets were required, but not provided", you entered an incorrect password. Run the above command again, carefully entering your password.
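When scripting the scan step, the fixed-width columns above are awkward to parse because SSIDs can contain spaces (for example "Pi Towers"). `nmcli` offers a terse mode for this; a hedged Python sketch, assuming `nmcli -t -f SSID dev wifi list` prints one SSID per line:

[source,python]
----
import subprocess

def scan_ssids() -> list[str]:
    """Return SSIDs from a wireless scan, using nmcli's terse output
    (-t) restricted to the SSID field (-f SSID)."""
    output = subprocess.run(
        ["nmcli", "-t", "-f", "SSID", "dev", "wifi", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_ssids(output)

def parse_ssids(output: str) -> list[str]:
    # Terse mode prints one field per line; skip blank lines
    # (hidden networks report an empty SSID).
    return [line for line in output.splitlines() if line.strip()]
----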
To check if you're connected to a network, run the following command:

[source,console]
----
$ nmcli dev wifi list
----

You should see output similar to the following:

----
IN-USE  BSSID              SSID         MODE   CHAN  RATE        SIGNAL  BARS  SECURITY
*       90:72:40:1B:42:05  myNetwork    Infra  132   405 Mbit/s  89      ****  WPA2
        90:72:42:1B:78:04  myNetwork5G  Infra  11    195 Mbit/s  79      ***   WPA2
        9C:AB:F8:88:EB:0D  Pi Towers    Infra  1     260 Mbit/s  75      ***   WPA2 802.1X
        B4:2A:0E:64:BD:BE  Example      Infra  6     195 Mbit/s  37      **    WPA1 WPA2
----

Check for an asterisk (`*`) in the "IN-USE" column; it should appear in the same row as the SSID of the network you intended to connect to.

NOTE: You can manually edit your connection configurations in the `/etc/NetworkManager/system-connections/` directory.

===== Connect to an unsecured network

If the network you are connecting to does not use a password, run the following command:

[source,console]
----
$ sudo nmcli dev wifi connect
----

WARNING: Unsecured wireless networks can put your personal information at risk. Whenever possible, use a secured wireless network or VPN.

===== Connect to a hidden network

If you are using a hidden network, specify the "hidden" option with a value of "yes" when you run `nmcli`:

[source,console]
----
$ sudo nmcli --ask dev wifi connect hidden yes
----

===== Set network priority

If your device detects more than one known network at the same time, it could connect to any of them. Use the priority option to force your Raspberry Pi to prefer certain networks. Your device will connect to the in-range network with the highest priority.

Run the following command to view the priority of known networks:

[source,console]
----
$ nmcli --fields autoconnect-priority,name connection
----

You should see output similar to the following:

----
AUTOCONNECT-PRIORITY  NAME
0                     myNetwork
0                     lo
0                     Pi Towers
0                     Example
-999                  Wired connection 1
----

Use the `nmcli connection modify` command to set the priority of a network.
The following example command sets the priority of a network named "Pi Towers" to `10`:

[source,console]
----
$ nmcli connection modify "Pi Towers" connection.autoconnect-priority 10
----

Your device will always try to connect to the in-range network with the highest non-negative priority value. You can also assign a network a negative priority; your device will only attempt to connect to a negative priority network if no other known network is in range.

For example, consider the following networks:

----
AUTOCONNECT-PRIORITY  NAME
-1                    snake
0                     rabbit
1                     cat
1000                  dog
----

* If all of these networks were in range, your device would first attempt to connect to the "dog" network.
* If connection to the "dog" network fails, your device would attempt to connect to the "cat" network.
* If connection to the "cat" network fails, your device would attempt to connect to the "rabbit" network.
* If connection to the "rabbit" network fails, and your device detects no other known networks, your device will attempt to connect to the "snake" network.

=== Configure DHCP

By default, Raspberry Pi OS attempts to automatically configure all network interfaces by DHCP, falling back to automatic private addresses in the range 169.254.0.0/16 if DHCP fails.

=== Assign a static IP address

To allocate a static IP address to your Raspberry Pi, reserve an address for it on your router. Your Raspberry Pi will continue to have its address allocated via DHCP, but will receive the same address each time. A "fixed" address can be allocated by associating the MAC address of your Raspberry Pi with a static IP address in your DHCP server.

---

# Source: device-tree.adoc

== Device Trees, overlays, and parameters

Raspberry Pi kernels and firmware use a Device Tree (DT) to describe hardware. These Device Trees may include DT parameters that control onboard features.
DT overlays allow optional external hardware to be described and configured, and they also support parameters for more control.

The firmware loader (`start.elf` and its variants) is responsible for loading the DTB (Device Tree Blob - a machine-readable DT file). It chooses which one to load based on the board revision number, and makes modifications to further tailor it. This runtime customisation avoids the need for many DTBs with only minor differences.

User-provided parameters in `config.txt` are scanned, along with any overlays and their parameters, which are then applied. The loader examines the result to learn (for example) which UART, if any, is to be used for the console. Finally it launches the kernel, passing a pointer to the merged DTB.

[[part1]]
=== Device Trees

A Device Tree (DT) is a description of the hardware in a system. It should include the name of the base CPU, its memory configuration, and any peripherals (internal and external). A DT should not be used to describe the software, although by listing the hardware modules it does usually cause driver modules to be loaded.

NOTE: It helps to remember that DTs are supposed to be OS-neutral, so anything which is Linux-specific shouldn't be there.

A Device Tree represents the hardware configuration as a hierarchy of nodes. Each node may contain properties and subnodes. Properties are named arrays of bytes, which may contain strings, numbers (big-endian), arbitrary sequences of bytes, and any combination thereof. By analogy to a filesystem, nodes are directories and properties are files. The locations of nodes and properties within the tree can be described using a path, with slashes as separators and a single slash (`/`) to indicate the root.

[[part1.1]]
==== Basic DTS syntax

Device Trees are usually written in a textual form known as Device Tree Source (DTS), and are stored in files with a `.dts` suffix. DTS syntax is C-like, with braces for grouping and semicolons at the end of each line.
Note that DTS requires semicolons after closing braces: think of C ``struct``s rather than functions. The compiled binary format is referred to as Flattened Device Tree (FDT) or Device Tree Blob (DTB), and is stored in `.dtb` files.

The following is a simple tree in the `.dts` format:

[source,kotlin]
----
/dts-v1/;
/include/ "common.dtsi";

/ {
    node1 {
        a-string-property = "A string";
        a-string-list-property = "first string", "second string";
        a-byte-data-property = [0x01 0x23 0x34 0x56];
        cousin: child-node1 {
            first-child-property;
            second-child-property = <1>;
            a-string-property = "Hello, world";
        };
        child-node2 {
        };
    };
    node2 {
        an-empty-property;
        a-cell-property = <1 2 3 4>; /* each number (cell) is a uint32 */
        child-node1 {
            my-cousin = <&cousin>;
        };
    };
};

/node2 {
    another-property-for-node2;
};
----

This tree contains:

* a required header: `/dts-v1/`
* the inclusion of another DTS file, conventionally named `*.dtsi` and analogous to a `.h` header file in C
* a single root node: `/`
* a couple of child nodes: `node1` and `node2`
* some children for node1: `child-node1` and `child-node2`
* a label (`cousin`) and a reference to that label (`&cousin`)
* several properties scattered through the tree
* a repeated node (`/node2`)

Properties are simple key-value pairs where the value can either be empty or contain an arbitrary byte stream. While data types are not encoded in the data structure, there are a few fundamental data representations that can be expressed in a Device Tree source file.
Text strings (NUL-terminated) are indicated with double quotes:

[source,kotlin]
----
string-property = "a string";
----

Cells are 32-bit unsigned integers delimited by angle brackets:

[source,kotlin]
----
cell-property = <0xbeef 123 0xabcd1234>;
----

Arbitrary byte data is delimited with square brackets, and entered in hex:

[source,kotlin]
----
binary-property = [01 23 45 67 89 ab cd ef];
----

Data of differing representations can be concatenated using a comma:

[source,kotlin]
----
mixed-property = "a string", [01 23 45 67], <0x12345678>;
----

Commas are also used to create lists of strings:

[source,kotlin]
----
string-list = "red fish", "blue fish";
----

[[part1.2]]
==== An aside about `/include/`

The `/include/` directive results in simple textual inclusion, much like C's `#include` directive, but a feature of the Device Tree compiler leads to different usage patterns. Given that nodes are named, potentially with absolute paths, it is possible for the same node to appear twice in a DTS file (and its inclusions). When this happens, the nodes and properties are combined, interleaving and overwriting properties as required (later values override earlier ones).

In the example above, the second appearance of `/node2` causes a new property to be added to the original:

[source,kotlin]
----
/node2 {
    an-empty-property;
    a-cell-property = <1 2 3 4>; /* each number (cell) is a uint32 */
    another-property-for-node2;
    child-node1 {
        my-cousin = <&cousin>;
    };
};
----

It is therefore possible for one `.dtsi` to overwrite, or provide defaults for, multiple places in a tree.

[[part1.3]]
==== Labels and references

It is often necessary for one part of the tree to refer to another, and there are four ways to do this:

Path strings:: Similar to filesystem paths, e.g. `/soc/i2s@7e203000` is the full path to the I2S device in BCM2835 and BCM2836.
The standard APIs don't create paths to properties like `/soc/i2s@7e203000/status`: instead, you first find a node, then choose properties of that node.

Phandles:: A unique 32-bit integer assigned to a node in its `phandle` property. For historical reasons, you may also see a redundant, matching `linux,phandle`. Phandles are numbered sequentially, starting from 1; 0 is not a valid phandle. They are usually allocated by the DT compiler when it encounters a reference to a node in an integer context, typically in the form of a label. References to nodes using phandles are simply encoded as the corresponding integer (cell) values; there is no markup to indicate that they should be interpreted as phandles, as that is application-defined.

Labels:: Just as a label in C gives a name to a place in the code, a DT label assigns a name to a node in the hierarchy. The compiler takes references to labels and converts them into paths when used in string context (`&node`) and phandles in integer context (`<&node>`); the original labels do not appear in the compiled output. Note that labels contain no structure; they are just tokens in a flat, global namespace.

Aliases:: Similar to labels, except that they do appear in the FDT output as a form of index. They are stored as properties of the `/aliases` node, with each property mapping an alias name to a path string. Although the aliases node appears in the source, the path strings usually appear as references to labels (`&node`), rather than being written out in full. DT APIs that resolve a path string to a node typically look at the first character of the path, treating paths that do not start with a slash as aliases that must first be converted to a path using the `/aliases` table.

[[part1.4]]
==== Device Tree semantics

How to construct a Device Tree, and how best to use it to capture the configuration of some hardware, is a large and complex subject.
There are many resources available, some of which are listed below, but several points deserve highlighting: * `compatible` properties are the link between the hardware description and the driver software. When an OS encounters a node with a `compatible` property, it looks it up in its database of device drivers to find the best match. In Linux, this usually results in the driver module being automatically loaded, provided it has been appropriately labelled and not blacklisted. * The `status` property indicates whether a device is enabled or disabled. If the `status` is `ok`, `okay` or absent, then the device is enabled. Otherwise, `status` should be `disabled`, so that the device is disabled. It can be useful to place devices in a `.dtsi` file with the status set to `disabled`. A derived configuration can then include that `.dtsi` and set the status for the devices which are needed to `okay`. [[part2]] === Device Tree overlays A modern System on a Chip (SoC) is a very complicated device; a complete Device Tree could be hundreds of lines long. Taking that one step further and placing the SoC on a board with other components only makes matters more complicated. To keep that manageable, particularly if there are related devices which share components, it makes sense to put the common elements in `.dtsi` files, to be included from possibly multiple `.dts` files. When a system like Raspberry Pi also supports optional plug-in accessories such as HATs, the problem grows. Ultimately, each possible configuration requires a Device Tree to describe it, but once you factor in all the different base models and the large number of available accessories, the number of combinations starts to multiply rapidly. What is needed is a way to describe these optional components using a partial Device Tree, and then to be able to build a complete tree by taking a base DT and adding a number of optional elements. You can do this, and these optional elements are called "overlays". 
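The `.dtsi`-plus-`status` pattern described above can be sketched as follows; the file names, the `i2c_ext` label and the I2C controller node are invented for this example, and the label-reference override syntax in the second file needs a sufficiently new `dtc`:

[source,kotlin]
----
// common.dtsi - shared hardware description, disabled by default
/ {
    soc {
        i2c_ext: i2c@7e805000 {
            status = "disabled";
        };
    };
};
----

[source,kotlin]
----
// board-a.dts - a derived configuration that actually uses the device
/dts-v1/;
/include/ "common.dtsi";

&i2c_ext {
    status = "okay";
};
----

Boards that don't need the device simply include `common.dtsi` and leave the status alone.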
Unless you want to learn how to write overlays for Raspberry Pis, you might prefer to skip on to <>. [[part2.1]] ==== Fragments A DT overlay comprises a number of fragments, each of which targets one node and its subnodes. Although the concept sounds simple enough, the syntax seems rather strange at first: [source,kotlin] ---- // Enable the i2s interface /dts-v1/; /plugin/; / { compatible = "brcm,bcm2835"; fragment@0 { target = <&i2s>; __overlay__ { status = "okay"; test_ref = <&test_label>; test_label: test_subnode { dummy; }; }; }; }; ---- The `compatible` string identifies this as being for BCM2835, which is the base architecture for the Raspberry Pi SoCs; if the overlay makes use of features of a Raspberry Pi 4 then `brcm,bcm2711` is the correct value to use, otherwise `brcm,bcm2835` can be used for all Raspberry Pi overlays. Then comes the first (and in this case only) fragment. Fragments should be numbered sequentially from zero. Failure to adhere to this may cause some or all of your fragments to be missed. Each fragment consists of two parts: a `target` property, identifying the node to apply the overlay to; and the `+__overlay__+` itself, the body of which is added to the target node. The example above can be interpreted as if it were written like this: [source,kotlin] ---- /dts-v1/; /plugin/; / { compatible = "brcm,bcm2835"; }; &i2s { status = "okay"; test_ref = <&test_label>; test_label: test_subnode { dummy; }; }; ---- With a sufficiently new version of `dtc` you can write the example exactly as above and get identical output, but some homegrown tools don't understand this format yet. Any overlay that you might want to see included in the standard Raspberry Pi OS kernel should be written in the old format for now. The effect of merging that overlay with a standard Raspberry Pi base Device Tree (e.g. `bcm2708-rpi-b-plus.dtb`), provided the overlay is loaded afterwards, would be to enable the I2S interface by changing its status to `okay`. 
But if you try to compile this overlay using: [source,console] ---- $ dtc -I dts -O dtb -o 2nd.dtbo 2nd-overlay.dts ---- ...you will get an error: ---- Label or path i2s not found ---- This shouldn't be too unexpected, since there is no reference to the base `.dtb` or `.dts` file to allow the compiler to find the `i2s` label. Trying again, this time using the original example and adding the `-@` option to allow unresolved references (and `-Hepapr` to remove some clutter): [source,console] ---- $ dtc -@ -Hepapr -I dts -O dtb -o 1st.dtbo 1st-overlay.dts ---- If `dtc` returns an error about the third line, it doesn't have the extensions required for overlay work. Run `sudo apt install device-tree-compiler` and try again - this time, compilation should complete successfully. Note that a suitable compiler is also available in the kernel tree as `scripts/dtc/dtc`, built when the `dtbs` make target is used: [source,console] ---- $ make ARCH=arm dtbs ---- Dump the contents of the DTB file to see what the compiler has generated: [source,console] ---- $ fdtdump 1st.dtbo ---- This should output something similar to the following: [source,kotlin] ---- /dts-v1/; // magic: 0xd00dfeed // totalsize: 0x207 (519) // off_dt_struct: 0x38 // off_dt_strings: 0x1c8 // off_mem_rsvmap: 0x28 // version: 17 // last_comp_version: 16 // boot_cpuid_phys: 0x0 // size_dt_strings: 0x3f // size_dt_struct: 0x190 / { compatible = "brcm,bcm2835"; fragment@0 { target = <0xffffffff>; __overlay__ { status = "okay"; test_ref = <0x00000001>; test_subnode { dummy; phandle = <0x00000001>; }; }; }; __symbols__ { test_label = "/fragment@0/__overlay__/test_subnode"; }; __fixups__ { i2s = "/fragment@0:target:0"; }; __local_fixups__ { fragment@0 { __overlay__ { test_ref = <0x00000000>; }; }; }; }; ---- After the verbose description of the file structure there is our fragment. 
But look carefully - where we wrote `&i2s` it now says `0xffffffff`, a clue that something strange has happened (older versions of dtc might say `0xdeadbeef` instead). The compiler has also added a `phandle` property containing a unique (to this overlay) small integer to indicate that the node has a label, and replaced all references to the label with the same small integer. After the fragment there are three new nodes: * `+__symbols__+` lists the labels used in the overlay (`test_label` here), and the path to the labelled node. This node is the key to how unresolved symbols are dealt with. * `+__fixups__+` contains a list of properties mapping the names of unresolved symbols to lists of paths to cells within the fragments that need patching with the phandle of the target node, once that target has been located. In this case, the path is to the `0xffffffff` value of `target`, but fragments can contain other unresolved references which would require additional fixes. * `+__local_fixups__+` holds the locations of any references to labels that exist within the overlay - the `test_ref` property. This is required because the program performing the merge will have to ensure that phandle numbers are sequential and unique. Back in <> it says that "the original labels do not appear in the compiled output", but this isn't true when using the `-@` switch. Instead, every label results in a property in the `+__symbols__+` node, mapping a label to a path, exactly like the `aliases` node. In fact, the mechanism is so similar that when resolving symbols, the Raspberry Pi loader will search the "aliases" node in the absence of a `+__symbols__+` node. This was useful at one time because providing sufficient aliases allowed very old versions of `dtc` to be used to build the base DTB files, but fortunately that is ancient history now. 
[[part2.2]] ==== Device Tree parameters To avoid the need for lots of Device Tree overlays, and to reduce the need for users of peripherals to modify DTS files, the Raspberry Pi loader supports a new feature - Device Tree parameters. This permits small changes to the DT using named parameters, similar to the way kernel modules receive parameters from `modprobe` and the kernel command line. Parameters can be exposed by the base DTBs and by overlays, including HAT overlays. Parameters are defined in the DTS by adding an `+__overrides__+` node to the root. It contains properties whose names are the chosen parameter names, and whose values are a sequence comprising a phandle (reference to a label) for the target node, and a string indicating the target property; string, integer (cell) and boolean properties are supported. [[part2.2.1]] ===== String parameters String parameters are declared like this: [source,kotlin] ---- name = <&label>,"property"; ---- where `label` and `property` are replaced by suitable values. String parameters can cause their target properties to grow, shrink, or be created. Note that properties called `status` are treated specially; non-zero/true/yes/on values are converted to the string `"okay"`, while zero/false/no/off becomes `"disabled"`. [[part2.2.2]] ===== Integer parameters Integer parameters are declared like this: [source,kotlin] ---- name = <&label>,"property.offset"; // 8-bit name = <&label>,"property;offset"; // 16-bit name = <&label>,"property:offset"; // 32-bit name = <&label>,"property#offset"; // 64-bit ---- Here, `label`, `property` and `offset` are replaced by suitable values; the offset is specified in bytes relative to the start of the property (in decimal by default), and the preceding separator dictates the size of the parameter. In a change from earlier implementations, integer parameters may refer to non-existent properties or to offsets beyond the end of an existing property. 
[[part2.2.3]]
===== Boolean parameters

Device Tree encodes boolean values as zero-length properties; if present then the property is true, otherwise it is false. They are defined like this:

[source,kotlin]
----
boolean_property; // Set 'boolean_property' to true
----

A property is assigned the value `false` by not defining it. Boolean parameters are declared like this, replacing the `label` and `property` placeholders with suitable values:

[source,kotlin]
----
name = <&label>,"property?";
----

Inverted booleans invert the input value before applying it in the same way as a regular boolean; they are declared similarly, but use `!` to indicate the inversion:

[source,kotlin]
----
name = <&label>,"property!";
----

Boolean parameters can cause properties to be created or deleted, but they can't delete a property that already exists in the base DTB.

[[part2.2.4]]
===== Byte string parameters

Byte string properties are arbitrary sequences of bytes, e.g. MAC addresses. They accept strings of hexadecimal bytes, with or without colons between the bytes.

[source,kotlin]
----
mac_address = <&ethernet0>,"local_mac_address[";
----

The `[` was chosen to match the DT syntax for declaring a byte string:

----
local_mac_address = [aa bb cc dd ee ff];
----

[[part2.2.5]]
===== Parameters with multiple targets

There are some situations where it is convenient to be able to set the same value in multiple locations within the Device Tree. Rather than the ungainly approach of creating multiple parameters, it is possible to add multiple targets to a single parameter by concatenating them, like this:

[source,kotlin]
----
__overrides__ {
    gpiopin = <&w1>,"gpios:4",
              <&w1_pins>,"brcm,pins:0";
    ...
};
----

(example taken from the `w1-gpio` overlay)

NOTE: It is even possible to target properties of different types with a single parameter. You could reasonably connect an "enable" parameter to a `status` string, cells containing zero or one, and a proper boolean property.
[[part2.2.6]]
===== Literal assignments

The DT parameter mechanism allows multiple targets to be patched from the same parameter, but the utility is limited by the fact that the same value has to be written to all locations (except for format conversion and the negation available from inverted booleans). The addition of embedded literal assignments allows a parameter to write arbitrary values, regardless of the parameter value supplied by the user. Assignments appear at the end of a declaration, and are indicated by a `=`:

[source,kotlin]
----
str_val  = <&target>,"strprop=value";               // 1
int_val  = <&target>,"intprop:0=42";                // 2
int_val2 = <&target>,"intprop:0=",<42>;             // 3
bytes    = <&target>,"bytestr[=b8:27:eb:01:23:45";  // 4
----

Lines 1, 2 and 4 are fairly obvious, but line 3 is more interesting because the value appears as an integer (cell) value. The DT compiler evaluates integer expressions at compile time, which might be convenient (particularly if macro values are used), but the cell can also contain a reference to a label:

[source,kotlin]
----
// Force an LED to use a GPIO on the internal GPIO controller.
exp_led = <&led1>,"gpios:0=",<&gpio>,
          <&led1>,"gpios:4";
----

When the overlay is applied, the label will be resolved against the base DTB in the usual way. It is a good idea to split multi-part parameters over multiple lines like this to make them easier to read - something that becomes more necessary with the addition of cell value assignments. Bear in mind that parameters do nothing unless they are applied - a default value in a lookup table is ignored unless the parameter name is used without assigning a value.
They act as associative arrays, rather like switch/case statements: [source,kotlin] ---- phonetic = <&node>,"letter{a=alpha,b=bravo,c=charlie,d,e,='tango uniform'}"; bus = <&fragment>,"target:0{0=",<&i2c0>,"1=",<&i2c1>,"}"; ---- A key with no `=value` means to use the key as the value, an `=` with no key before it is the default value in the case of no match, and starting or ending the list with a comma (or an empty key=value pair anywhere) indicates that the unmatched input value should be used unaltered; otherwise, not finding a match is an error. NOTE: The comma separator within the table string after a cell integer value is implicit - adding one explicitly creates an empty pair (see above). NOTE: As lookup tables operate on input values and literal assignments ignore them, it's not possible to combine the two - characters after the closing `}` in the lookup declaration are treated as an error. [[part2.2.8]] ===== Overlay/fragment parameters The DT parameter mechanism as described has a number of limitations, including the lack of an easy way to create arrays of integers, and the inability to create new nodes. One way to overcome some of these limitations is to conditionally include or exclude certain fragments. A fragment can be excluded from the final merge process (disabled) by renaming the `+__overlay__+` node to `+__dormant__+`. The parameter declaration syntax has been extended to allow the otherwise illegal zero target phandle to indicate that the following string contains operations at fragment or overlay scope. So far, four operations have been implemented: [source,kotlin] ---- + // Enable fragment - // Disable fragment = // Enable fragment if the assigned parameter value is true, otherwise disable it ! 
// Enable fragment if the assigned parameter value is false, otherwise disable it
----

Examples:

[source,kotlin]
----
just_one    = <0>,"+1-2"; // Enable 1, disable 2
conditional = <0>,"=3!4"; // Enable 3, disable 4 if value is true,
                          // otherwise disable 3, enable 4.
----

The `i2c-rtc` overlay uses this technique.

[[part2.2.9]]
===== Special properties

A few property names, when targeted by a parameter, get special handling. One you may have noticed already - `status` - will convert a boolean to `okay` for true and `disabled` for false.

Assigning to the `bootargs` property appends to it rather than overwriting it - this is how settings can be added to the kernel command line.

The `reg` property is used to specify device addresses - the location of a memory-mapped hardware block, the address on an I2C bus, etc. The names of child nodes should be qualified with their addresses in hexadecimal, using `@` as a separator:

[source,kotlin]
----
bmp280@76 {
    reg = <0x76>;
    ...
};
----

When assigning to the `reg` property, the address portion of the parent node name will be replaced with the assigned value. This can be used to prevent a node name clash when using the same overlay multiple times - a technique used by the `i2c-gpio` overlay.

The `name` property is a pseudo-property - it shouldn't appear in a DT, but assigning to it causes the name of its parent node to be changed to the assigned value. Like the `reg` property, this can be used to give nodes unique names.

[[part2.2.10]]
===== The overlay map file

The introduction of the Raspberry Pi 4, built around the BCM2711 SoC, brought with it many changes; some of these changes are additional interfaces, and some are modifications to (or removals of) existing interfaces. There are new overlays intended specifically for the Raspberry Pi 4 that don't make sense on older hardware, e.g.
overlays that enable the new SPI, I2C and UART interfaces, but other overlays don't apply correctly even though they control features that are still relevant on the new device. There is therefore a need for a method of tailoring an overlay to multiple platforms with differing hardware. Supporting them all in a single .dtbo file would require heavy use of hidden ("dormant") fragments and a switch to an on-demand symbol resolution mechanism so that a missing symbol that isn't needed doesn't cause a failure. A simpler solution is to add a facility to map an overlay name to one of several implementation files depending on the current platform. The overlay map is a file that gets loaded by the firmware at bootup. It is written in DTS source format - `overlay_map.dts`, compiled to `overlay_map.dtb` and stored in the overlays directory. This is an extract from the current map file (see the https://github.com/raspberrypi/linux/blob/rpi-6.6.y/arch/arm/boot/dts/overlays/overlay_map.dts[full version]): [source,kotlin] ---- / { disable-bt { bcm2835; bcm2711; bcm2712 = "disable-bt-pi5"; }; disable-bt-pi5 { bcm2712; }; uart5 { bcm2711; }; pi3-disable-bt { renamed = "disable-bt"; }; lirc-rpi { deprecated = "use gpio-ir"; }; }; ---- Each node has the name of an overlay that requires special handling. The properties of each node are either platform names or one of a small number of special directives. The overlay map supports the following platform names: * `bcm2835` for all Raspberry Pis built around the BCM2835, BCM2836, BCM2837, and RP3A0 SoCs * `bcm2711` for Raspberry Pi 4B, CM4, CM4S, and Pi 400 * `bcm2712` for Raspberry Pi 5, CM5, Pi 500, and Pi 500+ A platform name with no value (an empty property) indicates that the current overlay is compatible with the platform; for example, `uart5` is compatible with the `bcm2711` platform. 
A non-empty value for a platform is the name of an alternative overlay to use in place of the requested one; asking for `disable-bt` on BCM2712 results in `disable-bt-pi5` being loaded instead. Any platform not included in an overlay's node is not compatible with that overlay. Any overlay not mentioned in the map is assumed to be compatible with all platforms. The second example node - `disable-bt-pi5` - could be inferred from the content of `disable-bt`, but that intelligence goes into the construction of the file, not its interpretation. The `uart5` overlay only makes sense on BCM2711. In the event that a platform is not listed for an overlay, one of the special directives may apply: * The `renamed` directive indicates the new name of the overlay (which should be largely compatible with the original), but also logs a warning about the rename. * The `deprecated` directive contains a brief explanatory error message which will be logged after the common prefix `+overlay '...' is deprecated:+`. Chaining renames and platform-specific implementations is possible, but be careful to avoid loops! Remember: only exceptions need to be listed - the absence of a node for an overlay means that the default file should be used for all platforms. Accessing diagnostic messages from the firmware is covered in <>. The `dtoverlay` and `dtmerge` utilities have been extended to support the map file: * `dtmerge` extracts the platform name from the compatible string in the base DTB. * `dtoverlay` reads the compatible string from the live Device Tree at `/proc/device-tree`, but you can use the `-p` option to supply an alternate platform name (useful for dry runs on a different platform). They both send errors, warnings and any debug output to STDERR. 
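For example, assuming the `-D` (dry-run) and `-p` options combine as described, you could preview how an overlay would resolve for a different platform without being on that hardware:

[source,console]
----
$ dtoverlay -D -p bcm2712 disable-bt
----

On a BCM2712 platform the map would substitute `disable-bt-pi5` for the requested overlay.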
[[part2.2.11]]
===== Examples

Here are some examples of different types of properties, with parameters to modify them:

[source,kotlin]
----
/ {
    fragment@0 {
        target-path = "/";
        __overlay__ {
            test: test_node {
                string = "hello";
                status = "disabled";
                bytes = /bits/ 8 <0x67 0x89>;
                u16s = /bits/ 16 <0xabcd 0xef01>;
                u32s = /bits/ 32 <0xfedcba98 0x76543210>;
                u64s = /bits/ 64 < 0xaaaaa5a55a5a5555 0x0000111122223333>;
                bool1; // Defaults to true
                       // bool2 defaults to false
                mac = [01 23 45 67 89 ab];
                spi = <&spi0>;
            };
        };
    };

    fragment@1 {
        target-path = "/";
        __overlay__ {
            frag1;
        };
    };

    fragment@2 {
        target-path = "/";
        __dormant__ {
            frag2;
        };
    };

    __overrides__ {
        string =   <&test>,"string";
        enable =   <&test>,"status";
        byte_0 =   <&test>,"bytes.0";
        byte_1 =   <&test>,"bytes.1";
        u16_0 =    <&test>,"u16s;0";
        u16_1 =    <&test>,"u16s;2";
        u32_0 =    <&test>,"u32s:0";
        u32_1 =    <&test>,"u32s:4";
        u64_0 =    <&test>,"u64s#0";
        u64_1 =    <&test>,"u64s#8";
        bool1 =    <&test>,"bool1!";
        bool2 =    <&test>,"bool2?";
        entofr =   <&test>,"english",
                   <&test>,"french{hello=bonjour,goodbye='au revoir',weekend}";
        pi_mac =   <&test>,"mac[{1=b8273bfedcba,2=b8273b987654}";
        spibus =   <&test>,"spi:0{0=",<&spi0>,"1=",<&spi1>,"2=",<&spi2>,"}";
        only1 =    <0>,"+1-2";
        only2 =    <0>,"-1+2";
        enable1 =  <0>,"=1";
        disable2 = <0>,"!2";
    };
};
----

For further examples, a large collection of overlay source files is hosted in the https://github.com/raspberrypi/linux/tree/rpi-6.1.y/arch/arm/boot/dts/overlays[Raspberry Pi Linux GitHub repository].

[[part2.3]]
==== Export labels

The overlay handling in the firmware, and the run-time overlay application using the `dtoverlay` utility, treat labels defined in an overlay as being private to that overlay. This avoids the need to invent globally unique names for labels (which keeps them short), and it allows the same overlay to be used multiple times without clashing (provided some tricks are used - see <>). Sometimes it is very useful to be able to create a label with one overlay and use it from another.
Firmware released since 14th February 2020 has the ability to declare some labels as being global - the `+__exports__+` node: [source,kotlin] ---- ... public: ... __exports__ { public; // Export the label 'public' to the base DT }; }; ---- When this overlay is applied, the loader strips out all symbols except those that have been exported, in this case `public`, and rewrites the path to make it relative to the target of the fragment containing the label. Overlays loaded after this one can then refer to `&public`. [[part2.4]] ==== Overlay application order Under most circumstances it shouldn't matter in which order the fragments are applied, but for overlays that patch themselves (where the target of a fragment is a label in the overlay, known as an intra-overlay fragment) it becomes important. In older firmware, fragments are applied strictly in order, top to bottom. With firmware released since 14th February 2020, fragments are applied in two passes: * First the fragments that target other fragments are applied and hidden. * Then the regular fragments are applied. This split is particularly important for runtime overlays, since the first step occurs in the `dtoverlay` utility, and the second is performed by the kernel (which can't handle intra-overlay fragments). [[part3]] === Using Device Trees on Raspberry Pi [[part3.1]] ==== DTBs, overlays and `config.txt` On a Raspberry Pi it is the job of the loader (one of the `start.elf` images) to combine overlays with an appropriate base device tree, and then to pass a fully resolved Device Tree to the kernel. The base Device Trees are located alongside `start.elf` in the FAT partition (`/boot/firmware/` from Linux), named `bcm2711-rpi-4-b.dtb`, `bcm2710-rpi-3-b-plus.dtb`, etc. Note that some models (3A+, A, A+) will use the "b" equivalents (3B+, B, B+), respectively. This selection is automatic, and allows the same SD card image to be used in a variety of devices. 
NOTE: DT and ATAGs are mutually exclusive, and passing a DT blob to a kernel that doesn't understand it will cause a boot failure. The firmware will always try to load the DT and pass it to the kernel, because all kernels from rpi-4.4.y onwards will not function without a DTB. You can override this by adding `device_tree=` in config.txt, which forces the use of ATAGs; this can be useful for simple bare-metal kernels.

The loader now supports builds using bcm2835_defconfig, which selects the upstreamed BCM2835 support. This configuration will cause `bcm2835-rpi-b.dtb` and `bcm2835-rpi-b-plus.dtb` to be built. If these files are copied with the kernel, then the loader will attempt to load one of those DTBs by default.

In order to manage Device Tree and overlays, the loader supports a number of `config.txt` directives:

[source,ini]
----
dtoverlay=acme-board
dtparam=foo=bar,level=42
----

This will cause the loader to look for `overlays/acme-board.dtbo` in the firmware partition, which Raspberry Pi OS mounts on `/boot/firmware/`. It will then search for parameters `foo` and `level`, and assign the indicated values to them.

The loader will also search for an attached HAT with a programmed EEPROM, and load the supporting overlay from there - either directly or by name from the "overlays" directory; this happens without any user intervention.

There are multiple ways to tell that the kernel is using Device Tree:

* The "Machine model:" kernel message during bootup has a board-specific value such as "Raspberry Pi 2 Model B", rather than "BCM2709".
* `/proc/device-tree` exists, and contains subdirectories and files that exactly mirror the nodes and properties of the DT.

With a Device Tree, the kernel will automatically search for and load modules that support the indicated enabled devices.
As a result, by creating an appropriate DT overlay for a device you save users of the device from having to edit `/etc/modules`; all of the configuration goes in `config.txt`, and in the case of a HAT, even that step is unnecessary. Note, however, that layered modules such as `i2c-dev` still need to be loaded explicitly. The flipside is that because platform devices don't get created unless requested by the DTB, it should no longer be necessary to blacklist modules that used to be loaded as a result of platform devices defined in the board support code. In fact, current Raspberry Pi OS images ship with no blacklist files (except for some WLAN devices where multiple drivers are available). [[part3.2]] ==== DT parameters As described above, DT parameters are a convenient way to make small changes to a device's configuration. The current base DTBs support parameters for enabling and controlling the onboard audio, I2C, I2S and SPI interfaces without using dedicated overlays. In use, parameters look like this: [source,ini] ---- dtparam=audio=on,i2c_arm=on,i2c_arm_baudrate=400000,spi=on ---- NOTE: Multiple assignments can be placed on the same line, but ensure you don't exceed the 80-character limit. If you have an overlay that defines some parameters, they can be specified either on subsequent lines like this: [source,ini] ---- dtoverlay=lirc-rpi dtparam=gpio_out_pin=16 dtparam=gpio_in_pin=17 dtparam=gpio_in_pull=down ---- ...or appended to the overlay line like this: [source,ini] ---- dtoverlay=lirc-rpi,gpio_out_pin=16,gpio_in_pin=17,gpio_in_pull=down ---- Overlay parameters are only in scope until the next overlay is loaded. In the event of a parameter with the same name being exported by both the overlay and the base, the parameter in the overlay takes precedence; it's recommended that you avoid doing this. 
To expose the parameter exported by the base DTB instead, end the current overlay scope using: [source,ini] ---- dtoverlay= ---- [[part3.3]] ==== Board-specific labels and parameters Raspberry Pi boards have two I2C interfaces. These are nominally split: one for the Arm CPU, and one for the VideoCore GPU. On almost all models, `i2c1` belongs to the CPU and `i2c0` to the GPU, where it is used to control the camera and read the HAT EEPROM. However, there are two early revisions of the Model B that have those roles reversed. To make it possible to use one set of overlays and parameters with all Raspberry Pis, the firmware creates some board-specific DT parameters. These are: ---- i2c/i2c_arm i2c_vc i2c_baudrate/i2c_arm_baudrate i2c_vc_baudrate ---- These are aliases for `i2c0`, `i2c1`, `i2c0_baudrate`, and `i2c1_baudrate`. It is recommended that you only use `i2c_vc` and `i2c_vc_baudrate` if you really need to - for example, if you are programming a HAT EEPROM (which is better done using a software I2C bus using the `i2c-gpio` overlay). Enabling `i2c_vc` can stop the Raspberry Pi Camera or Raspberry Pi Touch Display functioning correctly. For people writing overlays, the same aliasing has been applied to the labels on the I2C DT nodes. Thus, you should write: [source,kotlin] ---- fragment@0 { target = <&i2c_arm>; __overlay__ { status = "okay"; }; }; ---- Any overlays using the numeric variants will be modified to use the new aliases. [[part3.4]] ==== HATs and Device Tree A Raspberry Pi HAT is an add-on board with an embedded EEPROM designed for a Raspberry Pi with a 40-pin header. The EEPROM includes any DT overlay required to enable the board (or the name of an overlay to load from the filing system), and this overlay can also expose parameters. The HAT overlay is automatically loaded by the firmware after the base DTB, so its parameters are accessible until any other overlays are loaded, or until the overlay scope is ended using `dtoverlay=`. 
If for some reason you want to suppress the loading of the HAT overlay, put `dtoverlay=` before any other `dtoverlay` or `dtparam` directive.

[[part3.5]]
==== Dynamic Device Tree

As of Linux 4.4, Raspberry Pi kernels support the dynamic loading of overlays and parameters. Compatible kernels manage a stack of overlays that are applied on top of the base DTB. Changes are immediately reflected in `/proc/device-tree` and can cause modules to be loaded and platform devices to be created and destroyed.

The use of the word "stack" above is important - overlays can only be added and removed at the top of the stack; changing something further down the stack requires that anything on top of it must first be removed.

There are some new commands for managing overlays:

[[part3.5.1]]
===== The `dtoverlay` command

`dtoverlay` is a command line utility that loads and removes overlays while the system is running, as well as listing the available overlays and displaying their help information.

Use `dtoverlay -h` to get usage information:

----
Usage:
  dtoverlay <overlay> [<param>=<val>...]
                           Add an overlay (with parameters)
  dtoverlay -D [<idx>]     Dry-run (prepare overlay, but don't apply -
                           save it as dry-run.dtbo)
  dtoverlay -r [<overlay>] Remove an overlay (by name, index or the last)
  dtoverlay -R [<overlay>] Remove from an overlay (by name, index or all)
  dtoverlay -l             List active overlays/params
  dtoverlay -a             List all overlays (marking the active)
  dtoverlay -h             Show this usage message
  dtoverlay -h <overlay>   Display help on an overlay
  dtoverlay -h <overlay> <param>..  Or its parameters
    where <overlay> is the name of an overlay or 'dtparam' for dtparams
Options applicable to most variants:
    -d <dir>    Specify an alternate location for the overlays
                (defaults to /boot/firmware/overlays or /flash/overlays)
    -v          Verbose operation
----

Unlike the `config.txt` equivalent, all parameters to an overlay must be included in the same command line - the <<part3.5.2,`dtparam`>> command is only for parameters of the base DTB.
Command variants that change kernel state (adding and removing things) require root privilege, so you may need to prefix the command with `sudo`.

Only overlays and parameters applied at run-time can be unloaded - an overlay or parameter applied by the firmware becomes "baked in" such that it won't be listed by `dtoverlay` and can't be removed.

[[part3.5.2]]
===== The `dtparam` command

`dtparam` creates and loads an overlay that has largely the same effect as using a dtparam directive in `config.txt`. In usage it is largely equivalent to `dtoverlay` with an overlay name of `-`, but there are a few differences:

* `dtparam` will list the help information for all known parameters of the base DTB. Help on the `dtparam` command is still available using `dtparam -h`.
* When indicating a parameter for removal, only index numbers can be used (not names).
* Not all Linux subsystems respond to the addition of devices at runtime - I2C, SPI and sound devices work, but some won't.

[[part3.5.3]]
===== Guidelines for writing runtime-capable overlays

The creation or deletion of a device object is triggered by a node being added or removed, or by the status of a node changing from disabled to enabled or vice versa. The absence of a "status" property means the node is enabled.

Don't create a node within a fragment that will overwrite an existing node in the base DTB - the kernel will rename the new node to make it unique. If you want to change the properties of an existing node, create a fragment that targets it.

ALSA doesn't prevent its codecs and other components from being unloaded while they are in use. Removing an overlay can cause a kernel exception if it deletes a codec that is still being used by a sound card. Experimentation found that devices are deleted in the reverse of fragment order in the overlay, so placing the node for the card after the nodes for the components allows an orderly shutdown.
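To illustrate the guidance above, here is a minimal overlay sketch - not a specific supported overlay, just a generic example. It targets the existing node via the `i2c_arm` label and merely enables it; creating a clashing node of the same name would simply get renamed by the kernel:

[source,dts]
----
/dts-v1/;
/plugin/;

/ {
    compatible = "brcm,bcm2835";

    /* Change the properties of an existing node by targeting its label,
       rather than redefining the node in the overlay. */
    fragment@0 {
        target = <&i2c_arm>;
        __overlay__ {
            status = "okay";
        };
    };
};
----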
[[part3.5.4]]
===== Caveats

The loading of overlays at runtime is a recent addition to the kernel, and at the time of writing there is no accepted way to do this from userspace. By hiding the details of this mechanism behind commands, users are insulated from changes in the event that a different kernel interface becomes standardised.

* Some overlays work better at run-time than others. Parts of the Device Tree are only used at boot time - changing them using an overlay will not have any effect.
* Applying or removing some overlays may cause unexpected behaviour, so it should be done with caution. This is one of the reasons it requires `sudo`.
* Unloading the overlay for an ALSA card can stall if something is actively using ALSA - the LXPanel volume slider plugin demonstrates this effect. To enable overlays for sound cards to be removed, the `lxpanelctl` utility has been given two new options - `alsastop` and `alsastart` - and these are called from the auxiliary scripts `dtoverlay-pre` and `dtoverlay-post` before and after overlays are loaded or unloaded, respectively.
* Removing an overlay will not cause a loaded module to be unloaded, but it may cause the reference count of some modules to drop to zero. Running `rmmod -a` twice will cause unused modules to be unloaded.
* Overlays have to be removed in reverse order. The commands will allow you to remove an earlier one, but all the intermediate ones will be removed and re-applied, which may have unintended consequences.
* Only Device Tree nodes at the top level of the tree and children of a bus node will be probed. For nodes added at run-time there is the further limitation that the bus must register for notifications of the addition and removal of children.
However, there are exceptions that break this rule and cause confusion: the kernel explicitly scans the entire tree for some device types - clocks and interrupt controllers being the two main ones - in order to initialise clocks early and/or to initialise interrupt controllers in a particular order. This search mechanism only happens during booting and so doesn't work for nodes added by an overlay at run-time. It is therefore recommended for overlays to place fixed-clock nodes in the root of the tree unless it is guaranteed that the overlay will not be used at run-time.

[[part3.6]]
==== Supported overlays and parameters

For a list of supported overlays and parameters, see the https://github.com/raspberrypi/firmware/blob/master/boot/overlays/README[README] file found alongside the overlay `.dtbo` files in `/boot/firmware/overlays`. It is kept up-to-date with additions and changes.

[[part4]]
=== Firmware parameters

The firmware uses the special https://www.kernel.org/doc/html/latest/devicetree/usage-model.html#runtime-configuration[/chosen] node to pass parameters between the bootloader and/or firmware and the operating system.

* Each property is stored as a 32-bit unsigned integer unless indicated otherwise.
* Numbers in device-tree are stored in binary and are big-endian.

Example shell command for reading a 32-bit unsigned integer property:

[source,console]
----
printf "%d" "0x$(od "/proc/device-tree/chosen/bootloader/partition" -v -An -t x1 | tr -d ' ' )"
----

`overlay_prefix`:: _(string)_ The xref:config_txt.adoc#overlay_prefix[overlay_prefix] string selected by `config.txt`.

`os_prefix`:: _(string)_ The xref:config_txt.adoc#os_prefix[os_prefix] string selected by `config.txt`.

`rpi-boardrev-ext`:: The extended board revision code from xref:raspberry-pi.adoc#otp-register-and-bit-definitions[OTP row 33].

`rpi-country-code`:: The country code used by https://github.com/raspberrypi-ui/piwiz[PiWiz]. Keyboard models only.
`rpi-duid`:: _(string)_ Raspberry Pi 5 only. A string representation of the QR code on the PCB.

`rpi-serial64`:: _(string)_ A string representation of the 64-bit serial number. On flagship models since Raspberry Pi 5 this is the same as the normal serial number (`/proc/device-tree/serial-number`). On earlier models the default serial number is still 32-bit, but with newer firmware a 64-bit serial number is now available and is visible through this node.

==== Common bootloader properties

`/chosen/bootloader`

`boot-mode`:: The boot-mode used to load the kernel. See the xref:raspberry-pi.adoc#BOOT_ORDER[BOOT_ORDER] documentation for a list of possible boot-mode values.

`partition`:: The partition number used during boot. If a `boot.img` ramdisk is loaded then this refers to the partition that the ramdisk was loaded from, rather than the partition number within the ramdisk.

`pm_rsts`:: The value of the `PM_RSTS` register during boot.

`tryboot`:: Set to `1` if the `tryboot` flag was set at boot.

==== Boot variables

`/chosen/bootloader` Raspberry Pi 5 only.

`arg1`:: The value of the user-defined reboot argument from the previous boot. See xref:config_txt.adoc#boot_arg1[boot_arg1].

`count`:: The value of the 8-bit `boot_count` variable when the OS was started. See xref:config_txt.adoc#boot_count[boot_count].

==== Power supply properties

`/chosen/power` Raspberry Pi 5 only.

`max_current`:: The maximum current in mA that the power supply can supply. The firmware reports the value indicated by the USB-C, USB-PD or PoE interfaces. For bench power supplies (e.g. connected to the GPIO header), define `PSU_MAX_CURRENT` in the bootloader configuration to indicate the power supply current capability.

`power_reset`:: Raspberry Pi 5 only. A bit field indicating the reason why the PMIC was reset.
|===
| Bit | Reason

| 0 | Over voltage
| 1 | Under voltage
| 2 | Over temperature
| 3 | Enable signal
| 4 | Watchdog
|===

`rpi_power_supply`:: _(two 32-bit integers)_ The USB VID and Product VDO of the official Raspberry Pi 27 W power supply (if connected).

`usb_max_current_enable`:: Zero if the USB port current limiter was set to the low limit during boot; non-zero if the high limit was enabled. The high limit is automatically enabled if the power supply claims 5A max-current, or if `usb_max_current_enable=1` is forced in `config.txt`.

`usb_over_current_detected`:: Non-zero if a USB over-current event occurred during USB boot.

`usbpd_power_data_objects`:: _(binary blob containing multiple 32-bit integers)_ The raw binary USB-PD objects (fixed supply only) received by the bootloader during USB-PD negotiation. To capture this for a bug report, run `hexdump -C /proc/device-tree/chosen/power/usbpd_power_data_objects`. The format is defined by the https://usb.org/document-library/usb-power-delivery[USB Power Delivery] specification.

==== BCM2711 and BCM2712 specific bootloader properties

`/chosen/bootloader`

The following properties are specific to the BCM2711 and BCM2712 SPI EEPROM bootloaders.

`build_timestamp`:: The UTC build time for the EEPROM bootloader.

`capabilities`:: This bit-field describes the features supported by the current bootloader. This may be used to check whether a feature (e.g. USB boot) is supported before enabling it in the bootloader EEPROM config.
|===
| Bit | Feature

| 0 | xref:raspberry-pi.adoc#usb-mass-storage-boot[USB boot] using the VLI USB host controller
| 1 | xref:remote-access.adoc#network-boot-your-raspberry-pi[Network boot]
| 2 | xref:raspberry-pi.adoc#fail-safe-os-updates-tryboot[TRYBOOT_A_B] mode
| 3 | xref:raspberry-pi.adoc#fail-safe-os-updates-tryboot[TRYBOOT]
| 4 | xref:raspberry-pi.adoc#usb-mass-storage-boot[USB boot] using the BCM2711 USB host controller
| 5 | xref:config_txt.adoc#boot_ramdisk[RAM disk - boot.img]
| 6 | xref:raspberry-pi.adoc#nvme-ssd-boot[NVMe boot]
| 7 | https://github.com/raspberrypi/usbboot/blob/master/Readme.md#secure-boot[Secure Boot]
|===

`update_timestamp`:: The UTC update timestamp set by `rpi-eeprom-update`.

`signed`:: If Secure Boot is enabled, this bit-field will be non-zero. The individual bits indicate the current Secure Boot configuration.
+
|===
| Bit | Description

| 0 | `SIGNED_BOOT` was defined in the EEPROM config file.
| 1 | Reserved
| 2 | The ROM development key has been revoked. See xref:config_txt.adoc#revoke_devkey[revoke_devkey].
| 3 | The customer public key digest has been written to OTP. See xref:config_txt.adoc#program_pubkey[program_pubkey].
| 4...31 | Reserved
|===

`version`:: _(string)_ The Git version string for the bootloader.

==== BCM2711 and BCM2712 USB boot properties

`/chosen/bootloader/usb`

The following properties are defined if the system was booted from USB. These may be used to uniquely identify the USB boot device.

`usb-version`:: The USB major protocol version (2 or 3).

`route-string`:: The USB route-string identifier for the device as defined by the USB 3.0 specification.

`root-hub-port-number`:: The root hub port number that the boot device is connected to - possibly via other USB hubs.

`lun`:: The Logical Unit Number for the mass-storage device.
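As a sketch of how these bit-fields can be decoded in practice - the value `0x5f` is illustrative, not read from a real board - the 32-bit big-endian decoding recipe from the Firmware parameters section can be combined with a shell bit test against the `capabilities` bit-field described above:

```shell
# Simulate a 4-byte big-endian devicetree property with value 0x0000005f;
# on a real Raspberry Pi you would read
# /proc/device-tree/chosen/bootloader/capabilities instead.
prop=$(mktemp)
printf '\x00\x00\x00\x5f' > "$prop"
caps=$(printf "%d" "0x$(od "$prop" -v -An -t x1 | tr -d ' \n')")
rm -f "$prop"

# Bit 6 corresponds to NVMe boot in the capabilities bit-field.
bit=6
if [ $(( (caps >> bit) & 1 )) -eq 1 ]; then
    echo "NVMe boot supported"
else
    echo "NVMe boot not supported"
fi
```

The same bit-shift test works for any of the bit-fields listed in this section.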
==== NVMEM nodes

The firmware provides read-only, in-memory copies of portions of the bootloader EEPROM via the https://www.kernel.org/doc/html/latest/driver-api/nvmem.html[NVMEM] subsystem. Each region appears as an NVMEM device under `/sys/bus/nvmem/devices/` with a named alias under `/sys/firmware/devicetree/base/aliases`.

Example shell script code for reading an NVMEM node from https://github.com/raspberrypi/rpi-eeprom/blob/master/rpi-eeprom-update[rpi-eeprom-update]:

[source,shell]
----
blconfig_alias="/sys/firmware/devicetree/base/aliases/blconfig"
blconfig_nvmem_path=""
if [ -f "${blconfig_alias}" ]; then
   blconfig_ofnode_path="/sys/firmware/devicetree/base"$(strings "${blconfig_alias}")""
   blconfig_ofnode_link=$(find -L /sys/bus/nvmem -samefile "${blconfig_ofnode_path}" 2>/dev/null)
   if [ -e "${blconfig_ofnode_link}" ]; then
      blconfig_nvmem_path=$(dirname "${blconfig_ofnode_link}")
   fi
fi
----

`blconfig`:: alias that refers to an NVMEM device that stores a copy of the bootloader EEPROM config file.

`blpubkey`:: alias that points to an NVMEM device that stores a copy of the bootloader EEPROM public key (if defined) in binary format. The https://github.com/raspberrypi/usbboot/blob/master/tools/rpi-bootloader-key-convert[rpi-bootloader-key-convert] utility can be used to convert the data into PEM format for use with OpenSSL. For more information, see https://github.com/raspberrypi/usbboot#secure-boot[secure-boot].

[[part5]]
=== Troubleshooting

[[part5.1]]
==== Debugging

The loader will skip over missing overlays and bad parameters, but if there are serious errors, such as a missing or corrupt base DTB or a failed overlay merge, then the loader will fall back to a non-DT boot. If this happens, or if your settings don't behave as you expect, it is worth checking for warnings or errors from the loader:

[source,console]
----
$ sudo vclog --msg
----

Extra debugging can be enabled by adding `dtdebug=1` to `config.txt`.
You can create a human-readable representation of the current state of DT like this:

[source,console]
----
$ dtc -I fs /proc/device-tree
----

This can be useful to see the effect of merging overlays onto the underlying tree.

If kernel modules don't load as expected, check that they aren't blacklisted in `/etc/modprobe.d/raspi-blacklist.conf`; blacklisting shouldn't be necessary when using Device Tree. If that shows nothing untoward, you can also check that the module is exporting the correct aliases by searching `/lib/modules/<version>/modules.alias` for the `compatible` value. Otherwise, your driver is probably missing either:

----
.of_match_table = xxx_of_match,
----

or:

----
MODULE_DEVICE_TABLE(of, xxx_of_match);
----

Failing that, `depmod` has failed or the updated modules haven't been installed on the target filesystem.

[[part5.2]]
==== Test overlays using `dtmerge`, `dtdiff` and `ovmerge`

Alongside the `dtoverlay` and `dtparam` commands is a utility for applying an overlay to a DTB - `dtmerge`. To use it, you first need a base DTB, which can be obtained in one of two ways:

Generate it from the live DT state in `/proc/device-tree`:

[source,console]
----
$ dtc -I fs -O dtb -o base.dtb /proc/device-tree
----

This will include any overlays and parameters you have applied so far, either in `config.txt` or by loading them at runtime, which may or may not be what you want.

Alternatively, copy it from the source DTBs in `/boot/firmware/`. This won't include overlays and parameters, but it also won't include any other modifications by the firmware. To allow testing of all overlays, the `dtmerge` utility will create some of the board-specific aliases ("i2c_arm", etc.), but this means that the result of a merge will include more differences from the original DTB than you might expect. The solution to this is to use `dtmerge` to make the copy:

[source,console]
----
$ dtmerge /boot/firmware/bcm2710-rpi-3-b.dtb base.dtb -
----

(the `-` indicates an absent overlay name).
You can now try applying an overlay or parameter:

[source,console]
----
$ dtmerge base.dtb merged.dtb - sd_overclock=62
$ dtdiff base.dtb merged.dtb
----

which will return:

[source,diff]
----
--- /dev/fd/63	2016-05-16 14:48:26.396024813 +0100
+++ /dev/fd/62	2016-05-16 14:48:26.396024813 +0100
@@ -594,7 +594,7 @@
 };

 sdhost@7e202000 {
-	brcm,overclock-50 = <0x0>;
+	brcm,overclock-50 = <0x3e>;
 	brcm,pio-limit = <0x1>;
 	bus-width = <0x4>;
 	clocks = <0x8>;
----

You can also compare different overlays or parameters:

[source,console]
----
$ dtmerge base.dtb merged1.dtb /boot/firmware/overlays/spi1-1cs.dtbo
$ dtmerge base.dtb merged2.dtb /boot/firmware/overlays/spi1-2cs.dtbo
$ dtdiff merged1.dtb merged2.dtb
----

to get:

[source,diff]
----
--- /dev/fd/63	2016-05-16 14:18:56.189634286 +0100
+++ /dev/fd/62	2016-05-16 14:18:56.189634286 +0100
@@ -453,7 +453,7 @@
 spi1_cs_pins {
 	brcm,function = <0x1>;
-	brcm,pins = <0x12>;
+	brcm,pins = <0x12 0x11>;
 	phandle = <0x3e>;
 };

@@ -725,7 +725,7 @@
 	#size-cells = <0x0>;
 	clocks = <0x13 0x1>;
 	compatible = "brcm,bcm2835-aux-spi";
-	cs-gpios = <0xc 0x12 0x1>;
+	cs-gpios = <0xc 0x12 0x1 0xc 0x11 0x1>;
 	interrupts = <0x1 0x1d>;
 	linux,phandle = <0x30>;
 	phandle = <0x30>;
@@ -743,6 +743,16 @@
 	spi-max-frequency = <0x7a120>;
 	status = "okay";
 };
+
+spidev@1 {
+	#address-cells = <0x1>;
+	#size-cells = <0x0>;
+	compatible = "spidev";
+	phandle = <0x41>;
+	reg = <0x1>;
+	spi-max-frequency = <0x7a120>;
+	status = "okay";
+};
 };

 spi@7e2150C0 {
----

The https://github.com/raspberrypi/utils[Utils] repo includes another DT utility - `ovmerge`. Unlike `dtmerge`, `ovmerge` combines files and applies overlays in source form. Because the overlay is never compiled, labels are preserved and the result is usually more readable. It also has a number of other tricks, such as the ability to list the order of file inclusion.
[[part5.3]] ==== Force a specific Device Tree If you have very specific needs that aren't supported by the default DTBs, or if you just want to experiment with writing your own DTs, you can tell the loader to load an alternate DTB file like this: [source,ini] ---- device_tree=my-pi.dtb ---- [[part5.4]] ==== Disable Device Tree usage Device Tree usage is required in Raspberry Pi Linux kernels. For bare metal and other OSs, DT usage can be disabled by adding: [source,ini] ---- device_tree= ---- to `config.txt`. [[part5.5]] ==== Shortcuts and syntax variants The loader understands a few shortcuts: [source,ini] ---- dtparam=i2c_arm=on dtparam=i2s=on ---- can be shortened to: [source,ini] ---- dtparam=i2c,i2s ---- (`i2c` is an alias of `i2c_arm`, and the `=on` is assumed). It also still accepts the long-form versions: `device_tree_overlay` and `device_tree_param`. [[part5.6]] ==== Other DT commands available in `config.txt` `device_tree_address`:: This is used to override the address where the firmware loads the device tree (not dt-blob). By default the firmware will choose a suitable place. `device_tree_end`:: This sets an (exclusive) limit to the loaded device tree. By default the device tree can grow to the end of usable memory, which is almost certainly what is required. `dtdebug`:: If non-zero, turn on some extra logging for the firmware's device tree processing. `enable_uart`:: Enable the xref:configuration.adoc#primary-and-secondary-uart[primary/console UART]. If the primary UART is `ttyAMA0`, `enable_uart` defaults to 1 (enabled), otherwise it defaults to 0 (disabled). This stops the core frequency from changing, which would make `ttyS0` unusable. As a result, `enable_uart=1` implies `core_freq=250` (unless `force_turbo=1`). In some cases this is a performance hit, so it is off by default. `overlay_prefix`:: Specifies a subdirectory/prefix from which to load overlays - defaults to "overlays/". Note the trailing "/". 
If desired you can add something after the final "/" to add a prefix to each file, although this is not likely to be needed. Further ports can be controlled by the DT. For more details see <>. [[part5.7]] ==== Further help If you've read through this document and have not found the answer to a Device Tree problem, there is help available. The author can usually be found on Raspberry Pi forums, particularly the https://forums.raspberrypi.com/viewforum.php?f=107[Device Tree] forum. --- # Source: display-resolution.adoc *Note: This file could not be automatically converted from AsciiDoc.* == Displays To configure your Raspberry Pi to use a non-default display mode, set the resolution or rotation manually. === Support for HDMI monitors With most HDMI monitors, Raspberry Pi OS uses the highest resolution and refresh rate supported by the monitor. The Raspberry Pi Zero, Zero W and Zero 2 W have a mini HDMI port, so you need a mini-HDMI-to-full-size-HDMI lead or adapter. Flagship models since Raspberry Pi 4B and Keyboard models have two micro HDMI ports, so you need a micro-HDMI-to-full-size-HDMI lead or adapter for each display you wish to attach. Connect the cables before turning on the Raspberry Pi. Flagship models since Raspberry Pi 4B, Compute Modules since CM4 (except for CM4S), and Keyboard models can drive up to two displays. 4-series devices support resolutions up to 1080p at a 60Hz refresh rate, or two 4K displays at a 30Hz refresh rate. You can also drive a single display at 4K with a 60Hz refresh rate if you connect the display to the `HDMI0` port and set the `hdmi_enable_4kp60=1` flag in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]. 5-series devices support up to two displays at 4K resolution at a 60hz refresh rate with no additional configuration. === Set resolution and rotation On the Raspberry Pi Desktop, open the *Preferences* menu and select the **Screen Configuration** utility. 
You should see a graphical representation of the displays connected to the Raspberry Pi. Right-click on the display you wish to modify, and select an option. Click **Apply**, then close **Screen Configuration**, to save your changes.

Alternatively, use the following command to open the **Screen Configuration** utility:

[source,console]
----
$ raindrop
----

[TIP]
====
If your installation of Raspberry Pi OS doesn't already include `raindrop`, you can install it with the following command:

[source,console]
----
$ sudo apt install raindrop
----

Older versions of Raspberry Pi OS used a different screen configuration utility named `arandr`. To uninstall `arandr`, run the following command:

[source,console]
----
$ sudo apt purge arandr
----
====

=== Manually set resolution and rotation

==== Determine display device name

To manually configure resolution and rotation, you'll need to know the names of your display devices. To determine the device names, run the following command to display information about attached devices:

[source,console]
----
$ kmsprint | grep Connector
----

==== Set a custom resolution

To set a custom resolution, use our Screen Configuration tool, `raindrop`. If your Raspberry Pi OS installation doesn't already include `raindrop` (for instance, if you're still using the previous Screen Configuration tool, `arandr`), you can download `raindrop` from `apt` or the Recommended Software GUI.

==== Set a custom rotation

To set a custom rotation, use our Screen Configuration tool, `raindrop`. If your Raspberry Pi OS installation doesn't already include `raindrop` (for instance, if you're still using the previous Screen Configuration tool, `arandr`), you can download `raindrop` from `apt` or the Recommended Software GUI.

If you run the Wayland desktop compositor, you can set a custom display rotation with `wlr-randr`.
The following commands rotate the display by 0°, 90°, 180°, and 270°: [source,console] ---- $ wlr-randr --output HDMI-A-1 --transform normal $ wlr-randr --output HDMI-A-1 --transform 90 $ wlr-randr --output HDMI-A-1 --transform 180 $ wlr-randr --output HDMI-A-1 --transform 270 ---- The `--output` option specifies the device to be rotated. NOTE: To run this command over SSH, add the following prefix: `WAYLAND_DISPLAY=wayland-1`, e.g. `WAYLAND_DISPLAY=wayland-1 wlr-randr --output HDMI-A-1 --transform 90`. You can also use one of the following `--transform` options to mirror the display at the same time as rotating it: `flipped`, `flipped-90`, `flipped-180`, `flipped-270`. === Console resolution and rotation To change the resolution and rotation of your Raspberry Pi in console mode, use the KMS settings. For more information, see <>. NOTE: When using console mode with multiple displays, all connected displays share the same rotation settings. --- # Source: external-storage.adoc *Note: This file could not be automatically converted from AsciiDoc.* == External storage You can connect your external hard disk, SSD, or USB stick to any of the USB ports on the Raspberry Pi, and mount the file system to access the data stored on it. By default, your Raspberry Pi automatically mounts some of the popular file systems such as FAT, NTFS, and HFS+ at the `/media/pi/` location. NOTE: Raspberry Pi OS Lite does not implement automounting. To set up your storage device so that it always mounts to a specific location of your choice, you must mount it manually. === Mount a storage device You can mount your storage device at a specific folder location. It is conventional to do this within the `/mnt` folder, for example `/mnt/mydisk`. Note that the folder must be empty. 
Plug the storage device into a USB port on the Raspberry Pi, and list all the disk partitions on the Raspberry Pi using the following command: [source,console] ---- $ sudo lsblk -o UUID,NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL,MODEL ---- The Raspberry Pi uses mount points `/` and `/boot/firmware/`. Your storage device will show up in this list, along with any other connected storage. Use the SIZE, LABEL, and MODEL columns to identify the name of the disk partition that points to your storage device. For example, `sda1`. The FSTYPE column contains the filesystem type. If your storage device uses an exFAT file system, install the exFAT driver: [source,console] ---- $ sudo apt update $ sudo apt install exfat-fuse ---- If your storage device uses an NTFS file system, you will have read-only access to it. If you want to write to the device, you can install the ntfs-3g driver: [source,console] ---- $ sudo apt update $ sudo apt install ntfs-3g ---- Run the following command to get the location of the disk partition: [source,console] ---- $ sudo blkid ---- For example, `/dev/sda1`. Create a target folder to be the mount point of the storage device. The mount point name used in this case is `mydisk`. You can specify a name of your choice: [source,console] ---- $ sudo mkdir /mnt/mydisk ---- Mount the storage device at the mount point you created: [source,console] ---- $ sudo mount /dev/sda1 /mnt/mydisk ---- Verify that the storage device is mounted successfully by listing the contents: [source,console] ---- $ ls /mnt/mydisk ---- === Automatically mount a storage device You can modify the `fstab` file to define the location where the storage device will be automatically mounted when the Raspberry Pi starts up. In the `fstab` file, the disk partition is identified by the universally unique identifier (UUID). Get the UUID of the disk partition: [source,console] ---- $ sudo blkid ---- Find the disk partition from the list and note the UUID. (For example, `5C24-1453`.) 
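If you prefer to script this step, the UUID can be pulled out of a `blkid`-style line with `sed`. The sample line below is illustrative, not output from a real device:

```shell
# Hypothetical blkid output line; on a real system you would use: sudo blkid
sample='/dev/sda1: LABEL="MYDISK" UUID="5C24-1453" TYPE="vfat" PARTUUID="a1b2c3d4-01"'
# Extract the value of the UUID= field (the leading space in the pattern
# prevents an accidental match against PARTUUID=)
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*[[:space:]]UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```

On a real system, replace the `sample` variable with the actual line of `blkid` output for your partition.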
Open the fstab file using a command line editor such as nano: [source,console] ---- $ sudo nano /etc/fstab ---- Add the following line in the `fstab` file: [source,bash] ---- UUID=5C24-1453 /mnt/mydisk fstype defaults,auto,users,rw,nofail 0 0 ---- Replace `fstype` with the type of your file system, which you found when you went through the steps above, for example: `ntfs`. If the filesystem type is FAT or NTFS, add `,umask=000` immediately after `nofail` - this will allow all users full read/write access to every file on the storage device. Now that you have set an entry in `fstab`, you can start up your Raspberry Pi with or without the storage device attached. Before you unplug the device you must either shut down the Raspberry Pi, or manually unmount it. NOTE: If you do not have the storage device attached when the Raspberry Pi starts, it will take an extra 90 seconds to start up. You can shorten this by adding `,x-systemd.device-timeout=30` immediately after `nofail`. This will change the timeout to 30 seconds, meaning the system will only wait 30 seconds before giving up trying to mount the disk. For more information on each Linux command, refer to the specific manual page using the `man` command. For example, `man fstab`. === Unmount a storage device When the Raspberry Pi shuts down, the system takes care of unmounting the storage device so that it is safe to unplug it. If you want to manually unmount a device, you can use the following command: [source,console] ---- $ sudo umount /mnt/mydisk ---- If you receive an error that the 'target is busy', this means that the storage device was not unmounted. If no error was displayed, you can now safely unplug the device. ==== Dealing with 'target is busy' The 'target is busy' message means there are files on the storage device that are in use by a program. To close the files, use the following procedure. Close any program which has open files on the storage device. 
If you have a terminal open, make sure that you are not in the folder where the storage device is mounted, or in a sub-folder of it. If you are still unable to unmount the storage device, you can use the `lsof` tool to check which program has files open on the device. You need to first install `lsof` using `apt`: [source,console] ---- $ sudo apt update $ sudo apt install lsof ---- To use lsof: [source,console] ---- $ lsof /mnt/mydisk ---- --- # Source: headless.adoc *Note: This file could not be automatically converted from AsciiDoc.* [[setting-up-a-headless-raspberry-pi]] == Set up a headless Raspberry Pi A **headless** Raspberry Pi runs without a monitor, keyboard, or mouse. To run a Raspberry Pi headless, you need a way to access it from another computer. To access your Raspberry Pi remotely, you'll need to connect your Raspberry Pi to a network, and a way to access the Raspberry Pi over that network. To connect your Raspberry Pi to a network, you can either plug your device into a wired connection via Ethernet or configure wireless networking. To access your Raspberry Pi over that network, use SSH. Once you've connected over SSH, you can use `raspi-config` to xref:remote-access.adoc#vnc[enable VNC] if you'd prefer a graphical desktop environment. If you're setting up your Raspberry Pi from scratch, set up wireless networking and SSH during the xref:getting-started.adoc#installing-the-operating-system[imaging process]. If you've already got a Raspberry Pi set up, you can configure SSH using `raspi-config`. WARNING: Depending on the model of Raspberry Pi and type of SD card you use, your Raspberry Pi may require up to five minutes to boot and connect to your wireless network the first time it boots. === Connect to a wired network To connect to a wired network at first boot, plug your headless Raspberry Pi in via Ethernet, or use an Ethernet adapter if your Raspberry Pi model does not include an Ethernet port. 
Your Raspberry Pi will automatically connect to the network. === Connect to a wireless network To configure wireless network access at first boot for a headless Raspberry Pi, enter the network information in the **Customisation > Wi-Fi** tab in Raspberry Pi Imager. Enter the SSID and password of your preferred wireless network. Your Raspberry Pi uses these credentials to connect to the network on first boot. Some wireless adapters and some Raspberry Pi boards don't support 5 GHz networks; check the documentation for your wireless module to ensure compatibility with your preferred network. NOTE: Previous versions of Raspberry Pi OS made use of a `wpa_supplicant.conf` file, which could be placed into the boot folder to configure wireless network settings. This functionality isn't available from Raspberry Pi OS _Bookworm_ onwards. === Remote access With no keyboard or monitor, you need a way to xref:remote-access.adoc[remotely control] your headless Raspberry Pi. On first boot, the only option is SSH. To enable SSH on a fresh installation of Raspberry Pi OS, choose one of the following methods: * Enable SSH in the **Customisation > Remote Access** tab in Raspberry Pi Imager, choose the authentication mechanism, and provide a username and password or public key. * Create a file named `ssh` at the root of the first partition of the SD card (labelled `bootfs`), then configure a user manually with `userconf.txt` using the instructions in the following section. For more information, see xref:remote-access.adoc#ssh[set up an SSH server]. Once you've connected over SSH, you can use `raspi-config` to xref:remote-access.adoc#vnc[enable VNC] if you'd prefer a graphical desktop environment. [[configuring-a-user]] ==== Configure a user manually At the root of the first partition of your SD card (the filesystem labelled `bootfs`), create a file named `userconf.txt`. 
This file should contain a single line of text consisting of `<username>:<encrypted-password>`: your desired username, followed immediately by a colon, followed immediately by an *encrypted* representation of the password you want to use.

NOTE: `<username>` must only contain lowercase letters, digits, and hyphens, and must start with a letter. It may not be longer than 31 characters.

To generate the encrypted password, use https://www.openssl.org[OpenSSL] on another computer. Open a terminal and enter the following:

[source,console]
----
$ openssl passwd -6
----

When prompted, enter your password and verify it. This command then outputs an encrypted version of the supplied password.

---

# Source: host-wireless-network.adoc

== Host a wireless network from your Raspberry Pi

Your Raspberry Pi can host its own wireless network using a wireless module. If you connect your Raspberry Pi to the internet via the Ethernet port (or a second wireless module), other devices connected to the wireless network can access the internet through your Raspberry Pi.

Consider a wired network that uses the `10.x.x.x` IP block. You can connect your Raspberry Pi to that network and serve wireless clients on a separate network that uses another IP block, such as `192.168.x.x`. In the diagram below, note that the laptop exists in an IP block separate from the router and wired clients:

image::images/host-a-network.png[]

With this network configuration, wireless clients can all communicate with each other through the Raspberry Pi router. However, clients on the wireless network cannot directly interact with clients on the wired network other than the Raspberry Pi; wireless clients exist in a private network separate from the network that serves wired clients.

NOTE: The Raspberry Pi 5, 4, 3, Zero W, and Zero 2 W can host a wireless network using the built-in wireless module.
Raspberry Pi models that lack a built-in module support this functionality using a separate wireless dongle.

=== Enable hotspot

To create a hosted wireless network on the command line, run the following command, replacing the `<example-network-name>` and `<example-password>` placeholders with your own values:

[source,console]
----
$ sudo nmcli device wifi hotspot ssid <example-network-name> password <example-password>
----

Use another wireless client, such as a laptop or smartphone, to connect to the network. Look for a network with an SSID matching `<example-network-name>`. Enter your network password, and you should connect successfully to the network. If your Raspberry Pi has internet access via an Ethernet connection or a second wireless adapter, you should be able to access the internet.

=== Disable hotspot

To disable the hotspot network and resume use of your Raspberry Pi as a wireless client, run the following command:

[source,console]
----
$ sudo nmcli device disconnect wlan0
----

After disabling the hotspot, run the following command to reconnect to another Wi-Fi network:

[source,console]
----
$ sudo nmcli device connect wlan0
----

TIP: For more information about connecting to wireless networks, see xref:configuration.adoc#networking[Configure networking].

=== Use your Raspberry Pi as a network bridge

By default, the wireless network hosted from your Raspberry Pi exists separately from the parent network connected via Ethernet. In this arrangement, devices connected to the parent network cannot directly communicate with devices connected to the wireless network hosted from your Raspberry Pi. If you want connected wireless devices to be able to communicate with devices on the parent network, you can configure your Raspberry Pi as a https://en.wikipedia.org/wiki/Network_bridge[network bridge]. With a network bridge in place, each device connected to the Pi-hosted wireless network is assigned an IP address in the parent network.
In the diagram below, the laptop exists in the same IP block as the router and wired clients:

image::images/bridge-network.png[]

The following steps describe how to set up a network bridge on your Raspberry Pi to enable communication between wireless clients and the parent network.

First, create a network bridge interface:

[source,console]
----
$ sudo nmcli connection add type bridge con-name 'Bridge' ifname bridge0
----

Next, add your device's Ethernet connection to the parent network to the bridge:

[source,console]
----
$ sudo nmcli connection add type ethernet slave-type bridge \
  con-name 'Ethernet' ifname eth0 master bridge0
----

Finally, add your wireless hotspot connection to the bridge. You can either add an existing hotspot interface or create a new one:

* If you have already created a wireless hotspot connection using the instructions above, add the existing interface to the bridge with the following command:
+
[source,console]
----
$ sudo nmcli connection modify 'Hotspot' master bridge0
----

* If you have not yet created a wireless hotspot connection, create a new interface and add it to the bridge with a single command, replacing the `<example-network-name>` and `<example-password>` placeholders with a network name and password of your choice, respectively:
+
[source,console]
----
$ sudo nmcli connection add con-name 'Hotspot' \
  ifname wlan0 type wifi slave-type bridge master bridge0 \
  wifi.mode ap wifi.ssid <example-network-name> wifi-sec.key-mgmt wpa-psk \
  wifi-sec.proto rsn wifi-sec.pairwise ccmp \
  wifi-sec.psk <example-password>
----

Now that you've configured your bridge, activate it:

[source,console]
----
$ sudo nmcli connection up Bridge
----

Then run the following command to start hosting your wireless network:

[source,console]
----
$ sudo nmcli connection up Hotspot
----

You can use the `nmcli device` command to verify that the bridge, Ethernet interface, and wireless hotspot interface are all active.
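One easy mistake when creating a hotspot is supplying a Wi-Fi passphrase outside WPA-PSK's allowed 8 to 63 character range, which NetworkManager rejects. Here is a minimal sketch for checking a candidate passphrase before running the commands above; the `valid_psk` helper name is our own, not part of `nmcli`:

```shell
# Hypothetical helper: a WPA-PSK passphrase must be 8-63 characters long.
valid_psk() {
  len=${#1}
  [ "$len" -ge 8 ] && [ "$len" -le 63 ]
}

valid_psk 'correct-horse-battery' && echo "passphrase length OK"
valid_psk 'short' || echo "passphrase too short"
```

Note that this only checks length; `nmcli` also accepts a 64-character raw hexadecimal key, which this sketch does not handle.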
TIP: Use a tool such as https://github.com/royhills/arp-scan[arp-scan] to check whether devices on the parent network are accessible once connected to the hotspot.

---

# Source: kernel-command-line-config.adoc

== Kernel command line (`cmdline.txt`)

The Linux kernel accepts a collection of command line parameters during boot. On the Raspberry Pi, this command line is defined in a file in the boot partition called `cmdline.txt`. You can edit this text file with any text editor:

[source,console]
----
$ sudo nano /boot/firmware/cmdline.txt
----

IMPORTANT: Put all parameters in `cmdline.txt` on the same line. Do _not_ use newlines.

To view the command line passed to the kernel at boot time, run the following command:

[source,console]
----
$ cat /proc/cmdline
----

Because the Raspberry Pi firmware makes changes to the command line before launching the kernel, the output of this command will not exactly match the contents of `cmdline.txt`.

=== Command line options

There are many kernel command line parameters, some of which are defined by the kernel itself. Others are defined by code that the kernel may be using, such as the Plymouth splash screen system.

==== Standard entries

`console`:: defines the serial console. There are usually two entries:
* `console=serial0,115200`
* `console=tty1`
`root`:: defines the location of the root filesystem, e.g. `root=/dev/mmcblk0p2` means multimedia card block 0, partition 2.
`rootfstype`:: defines what type of filesystem the rootfs uses, e.g. `rootfstype=ext4`.
`quiet`:: sets the default kernel log level to `KERN_WARNING`, which suppresses all but very serious log messages during boot.

==== Set the KMS display mode

The legacy firmware and FKMS display modes used in earlier versions of Raspberry Pi OS are no longer supported. Instead, recent OS versions use KMS (Kernel Mode Setting).
If no `video` entry is present in `cmdline.txt`, Raspberry Pi OS uses the https://en.wikipedia.org/wiki/Extended_Display_Identification_Data[EDID] of the HDMI-connected monitor to automatically pick the best resolution supported by your display. In Raspberry Pi OS Lite or console mode, you must customise the `video` entry to control resolution and rotation:

[source,bash]
----
video=HDMI-A-1:1920x1080M@60
----

In addition, it is possible to add rotation and reflection parameters, as documented in the standard https://github.com/raspberrypi/linux/blob/rpi-6.1.y/Documentation/fb/modedb.rst[Linux framebuffer documentation]. The following example defines a display named `HDMI-A-1` at a resolution of 1080p, a refresh rate of 60Hz, 90 degrees of rotation, and a reflection over the X axis:

[source,bash]
----
video=HDMI-A-1:1920x1080M@60,rotate=90,reflect_x
----

You must specify the resolution explicitly when specifying rotation and reflection parameters.

Possible options for the display type - the first part of the `video=` entry - include:

[cols="1m,3"]
|===
| Video option | Display

| HDMI-A-1
| HDMI 1 (HDMI 0 on the silkscreen of the Raspberry Pi 4B, HDMI on single-HDMI boards)

| HDMI-A-2
| HDMI 2 (HDMI 1 on the silkscreen of the Raspberry Pi 4B)

| DSI-1
| DSI or DPI

| Composite-1
| Composite
|===

==== Other entries

This section contains some of the other entries you can use in the kernel command line. This list is not exhaustive.

`splash`:: tells the boot sequence to use a splash screen, via the Plymouth module.
`plymouth.ignore-serial-consoles`:: normally, if the Plymouth module is enabled, it will prevent boot messages from appearing on any serial console which may be present. This flag tells Plymouth to ignore all serial consoles, making boot messages visible again, as they would be if Plymouth were not running.
`dwc_otg.lpm_enable=0`:: turns off Link Power Management (LPM) in the `dwc_otg` driver, which drives the USB controller built into the processor used on Raspberry Pi computers. On the Raspberry Pi 4, this controller is disabled by default and is only connected to the USB Type-C power input connector. The USB-A ports on the Raspberry Pi 4 are driven by a separate USB controller which is not affected by this setting.
`dwc_otg.speed`:: sets the speed of the USB controller built into the processor on Raspberry Pi computers. `dwc_otg.speed=1` sets it to full speed (USB 1.0), which is slower than high speed (USB 2.0). Do not set this option except when troubleshooting problems with USB devices.
`smsc95xx.turbo_mode`:: enables or disables the wired networking driver's turbo mode. `smsc95xx.turbo_mode=N` turns turbo mode off.
`usbhid.mousepoll`:: specifies the mouse polling interval. If you have problems with a slow or erratic wireless mouse, setting this to 0 with `usbhid.mousepoll=0` might help.
`drm.edid_firmware=HDMI-A-1:edid/your_edid.bin`:: overrides your monitor's built-in EDID with the contents of `/usr/lib/firmware/edid/your_edid.bin`.

---

# Source: led_blink_warnings.adoc

== LED warning flash codes

If a Raspberry Pi fails to boot, or has to shut down, in many cases an LED will flash a specific number of times to indicate what happened. The LED will blink a number of long flashes (0 or more), followed by short flashes, to indicate the exact status. In most cases, the pattern will repeat after a two-second gap.
[cols="^,^,"]
|===
| Long flashes | Short flashes | Status

| 0 | 3 | Generic failure to boot
| 0 | 4 | start*.elf not found
| 0 | 7 | Kernel image not found
| 0 | 8 | SDRAM failure
| 0 | 9 | Insufficient SDRAM
| 0 | 10 | In HALT state
| 1 | 2 | SD card overcurrent detected
| 2 | 1 | Partition not FAT
| 2 | 2 | Failed to read from partition
| 2 | 3 | Extended partition not FAT
| 2 | 4 | File signature/hash mismatch - Raspberry Pi 4 and 5
| 3 | 1 | SPI EEPROM error - Raspberry Pi 4 and 5
| 3 | 2 | SPI EEPROM is write-protected - Raspberry Pi 4 and 5
| 3 | 3 | I2C error - Raspberry Pi 4 and 5
| 3 | 4 | Secure boot configuration is not valid
| 4 | 3 | RP1 not found
| 4 | 4 | Unsupported board type
| 4 | 5 | Fatal firmware error
| 4 | 6 | Power failure type A
| 4 | 7 | Power failure type B
|===

---

# Source: localisation.adoc

== Localise your Raspberry Pi

You can configure the UI language, keyboard layout, and time zone of Raspberry Pi OS with the xref:configuration.adoc#raspi-config[`raspi-config`] tool.

---

# Source: pin-configuration.adoc

== Change the default pin configuration

NOTE: Custom default pin configurations via user-provided Device Tree blobs have been deprecated.

=== Device pins during boot sequence

During the boot sequence, the GPIO pins go through various actions:

* Power-on - pins default to inputs with default pulls, which are described in the https://datasheets.raspberrypi.com/bcm2835/bcm2835-peripherals.pdf[datasheet]
* Setting by the bootrom
* Setting by `bootcode.bin`
* Setting by `dt-blob.bin` (this page)
* Setting by the xref:config_txt.adoc#gpio-control[GPIO command] in `config.txt`
* Additional firmware pins (e.g. UARTs)
* Kernel/Device Tree

On a soft reset, the same procedure applies, except for default pulls, which are only applied on a power-on reset. It may take a few seconds to run through the process.
During this time, the GPIO pins may not be in the state expected by attached peripherals (as defined in `dt-blob.bin` or `config.txt`). Since different GPIO pins have different default pulls, do *one of the following* for your peripheral:

* Choose a GPIO pin that defaults on reset to the pulls required by the peripheral
* Delay the peripheral's startup until the actions are completed
* Add an appropriate pull-up or pull-down resistor

=== Provide a custom Device Tree blob

To compile a Device Tree source (`.dts`) file into a Device Tree blob (`.dtb`) file, install the Device Tree compiler with `sudo apt install device-tree-compiler`. You can then use the `dtc` command as follows:

[source,console]
----
$ sudo dtc -I dts -O dtb -o /boot/firmware/dt-blob.bin dt-blob.dts
----

Similarly, a `.dtb` file can be converted back to a `.dts` file, if required:

[source,console]
----
$ dtc -I dtb -O dts -o dt-blob.dts /boot/firmware/dt-blob.bin
----

=== Sections of the `dt-blob`

The `dt-blob.bin` is used to configure the binary blob (VideoCore) at boot time. It is not currently used by the Linux kernel. The dt-blob can configure all versions of the Raspberry Pi, including the Compute Module, to use alternative settings. The following sections are valid in the dt-blob:

==== `videocore`

This section contains all of the VideoCore blob information. All subsequent sections must be enclosed within this section.

==== `pins_*`

There are a number of separate `pins_*` sections, based on particular Raspberry Pi models, namely:

* `pins_rev1`: Rev1 pin setup. There are some differences because of the moved I2C pins.
* `pins_rev2`: Rev2 pin setup. This includes the additional codec pins on P5.
* `pins_bplus1`: Raspberry Pi 1 Model B+ rev 1.1, including the full 40-pin connector.
* `pins_bplus2`: Raspberry Pi 1 Model B+ rev 1.2, swapping the low-power and lan-run pins.
* `pins_aplus`: Raspberry Pi 1 Model A+, lacking Ethernet.
* `pins_2b1`: Raspberry Pi 2 Model B rev 1.0; controls the SMPS via I2C0.
* `pins_2b2`: Raspberry Pi 2 Model B rev 1.1; controls the SMPS via software I2C on pins 42 and 43.
* `pins_3b1`: Raspberry Pi 3 Model B rev 1.0
* `pins_3b2`: Raspberry Pi 3 Model B rev 1.2
* `pins_3bplus`: Raspberry Pi 3 Model B+
* `pins_3aplus`: Raspberry Pi 3 Model A+
* `pins_pi0`: Raspberry Pi Zero
* `pins_pi0w`: Raspberry Pi Zero W
* `pins_pi02w`: Raspberry Pi Zero 2 W
* `pins_cm`: Raspberry Pi Compute Module 1. The default for this is the default for the chip, so it is a useful source of information about default pull-ups/pull-downs on the chip.
* `pins_cm3`: Raspberry Pi Compute Module 3
* `pins_cm3plus`: Raspberry Pi Compute Module 3+
* `pins_cm4s`: Raspberry Pi Compute Module 4S
* `pins_cm4`: Raspberry Pi Compute Module 4

Each `pins_*` section can contain `pin_config` and `pin_defines` sections.

==== `pin_config`

The `pin_config` section is used to configure the individual pins. Each item in this section must be a named pin section, such as `pin@p32`, meaning GPIO32. There is a special section, `pin@default`, which contains the default settings for anything not specifically named in the `pin_config` section.

==== `pin@pinname`

This section can contain any combination of the following items:

* `polarity`
** `active_high`
** `active_low`
* `termination`
** `pull_up`
** `pull_down`
** `no_pulling`
* `startup_state`
** `active`
** `inactive`
* `function`
** `input`
** `output`
** `sdcard`
** `i2c0`
** `i2c1`
** `spi`
** `spi1`
** `spi2`
** `smi`
** `dpi`
** `pcm`
** `pwm`
** `uart0`
** `uart1`
** `gp_clk`
** `emmc`
** `arm_jtag`
* `drive_strength_mA`
+
The drive strength is used to set a strength for the pins. Note that you can only specify a single drive strength for the whole bank. `<8>` and `<16>` are valid values.

==== `pin_defines`

This section is used to assign specific VideoCore functionality to particular pins.
This enables the user to move the camera power enable pin somewhere different, or move the HDMI hotplug position: these are things that Linux does not control. Please refer to the example DTS file below.

=== Clock configuration

It is possible to change the configuration of the clocks through this interface, although it can be difficult to predict the results! The configuration of the clocking system is very complex. There are five separate PLLs, and each one has its own fixed (or variable, in the case of PLLC) VCO frequency. Each VCO then has a number of different channels which can be set up with a different division of the VCO frequency. Each of the clock destinations can be configured to come from one of the clock channels, although there is a restricted mapping of source to destination, so not all channels can be routed to all clock destinations.

Here are a couple of example configurations that you can use to alter specific clocks. We will add to this resource when requests for clock configurations are made.

[source,dts]
----
clock_routing {
   vco@PLLA { freq = <1966080000>; };
   chan@APER { div = <4>; };
   clock@GPCLK0 { pll = "PLLA"; chan = "APER"; };
};

clock_setup {
   clock@PWM { freq = <2400000>; };
   clock@GPCLK0 { freq = <12288000>; };
   clock@GPCLK1 { freq = <25000000>; };
};
----

The above sets PLLA to a source VCO running at 1.96608 GHz (the limits for this VCO are 600 MHz to 2.4 GHz), changes the APER channel to divide by 4, and configures GPCLK0 to be sourced from PLLA through APER. This gives an audio codec the 12.288 MHz it needs to produce the 48 kHz family of sample rates.

=== Sample Device Tree source file

The firmware repository contains a https://github.com/raspberrypi/firmware/blob/master/extra/dt-blob.dts[master Raspberry Pi blob] from which others are usually derived.
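The arithmetic behind the clock configuration example above can be checked quickly with plain shell arithmetic. This is just a sketch of the numbers, not part of the firmware interface:

```shell
# Sketch: verify the clock chain from the clock_routing/clock_setup example.
vco=1966080000                 # PLLA VCO frequency in Hz
aper=$((vco / 4))              # the APER channel divides the VCO by 4
gpclk0=12288000                # requested GPCLK0 frequency in Hz

echo "APER channel: $aper Hz"                  # 1966080000 / 4 = 491520000
echo "GPCLK0 divider: $((aper / gpclk0))"      # 491520000 / 12288000 = 40
echo "Base sample rate: $((gpclk0 / 256)) Hz"  # 12288000 / 256 = 48000
```

The last line reflects the common 256x master-clock-to-sample-rate ratio used by audio codecs, which is why 12.288 MHz yields the 48 kHz family of sample rates.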
---

# Source: raspi-config.adoc

[[raspi-config]]
== `raspi-config`

`raspi-config` helps you configure your Raspberry Pi. Changes made with `raspi-config` modify xref:config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] and other configuration files.

=== Getting started

To open the configuration tool from the desktop GUI, go to **Preferences** > **Control Centre**.

NOTE: In previous versions of Raspberry Pi OS, the Control Centre application was called Raspberry Pi Configuration.

Alternatively, run the following command to access the configuration tool via the terminal:

[source,console]
----
$ sudo raspi-config
----

TIP: Some advanced configuration is available in the `raspi-config` CLI, but not in the Control Centre GUI.

To navigate the configuration tool from the terminal:

* Use the up and down arrow keys to scroll through the settings list.
* Access the `