Supported Hardware
Frigate supports several detector types that run on different kinds of hardware:
Most Hardware
- Coral EdgeTPU: The Google Coral EdgeTPU is available in USB, Mini PCIe, and m.2 formats allowing for a wide range of compatibility with devices.
- Hailo: The Hailo8 and Hailo8L AI acceleration modules are available in m.2 format, with a HAT for RPi devices, offering a wide range of compatibility with devices.
- Community Supported MemryX: The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- Community Supported DeGirum: A service for running inference on hardware devices in the cloud or locally. Hardware and models are provided in the cloud via their website.
AMD
- ROCm: ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- ONNX: ROCm will automatically be detected and used as a detector in the -rocm Frigate image when a supported ONNX model is configured.
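As a sketch, the detector entry in this setup might look like the following; the detector name `onnx` is arbitrary, and a supported ONNX model still has to be configured separately:

```yaml
detectors:
  onnx:
    type: onnx
```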
Apple Silicon
- Apple Silicon: The Apple Silicon detector can run on M1 and newer Apple Silicon devices.
Intel
- OpenVINO: OpenVINO can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- ONNX: OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.
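A minimal OpenVINO detector entry could look like this sketch; the detector name `ov` is arbitrary, and `device: GPU` assumes an Intel GPU is exposed to the Frigate container (use `CPU` to run on the processor instead):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU
```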
Nvidia GPU
- ONNX: Nvidia GPUs will automatically be detected and used as a detector in the -tensorrt Frigate image when a supported ONNX model is configured.
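As an illustration, selecting the TensorRT-enabled image in Docker Compose might look like the sketch below; the service name shown is a placeholder, and the ONNX detector itself is then configured in Frigate's config file with `type: onnx`:

```yaml
# docker-compose.yml excerpt (hypothetical service name)
services:
  frigate:
    # -tensorrt variant of the Frigate image for Nvidia GPU support
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
```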
Nvidia Jetson Community Supported
- TensorRT: TensorRT can run on Jetson devices, using one of many default models.
- ONNX: TensorRT will automatically be detected and used as a detector in the -tensorrt-jp6 Frigate image when a supported ONNX model is configured.
Rockchip Community Supported
- RKNN: RKNN models can run on Rockchip devices with included NPUs.
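A minimal RKNN detector entry might look like the following sketch; the detector name `rknn` is arbitrary, and `num_cores` (the number of NPU cores to use) is an optional setting assumed here:

```yaml
detectors:
  rknn:
    type: rknn
    num_cores: 3
```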
Synaptics Community Supported
- Synaptics: Synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
AXERA Community Supported
- AXEngine: axmodels can run on AXERA AI acceleration hardware.
For Testing
- CPU Detector (not recommended for actual use): Uses the CPU to run a TFLite model. This is not recommended; in most cases OpenVINO can be used in CPU mode with better results.
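For completeness, a CPU detector sketch; the detector name `cpu1` is arbitrary and the `num_threads` setting is assumed optional:

```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
```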
Multiple detector types cannot be mixed for object detection (for example, OpenVINO and Coral EdgeTPU cannot be used for object detection at the same time).
This does not affect using hardware to accelerate other tasks such as semantic search.
Officially Supported Detectors
Frigate provides a number of built-in detector types. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors, they will run in dedicated processes but pull from a common queue of detection requests from across all cameras.
Edge TPU Detector
The Edge TPU detector type runs TensorFlow Lite models utilizing the Google Coral delegate for hardware acceleration. To configure an Edge TPU detector, set the "type" attribute to "edgetpu".
The Edge TPU device can be specified using the "device" attribute according to the Documentation for the TensorFlow Lite Python API. If not set, the delegate will use the first device it finds.
See common Edge TPU troubleshooting steps if the Edge TPU is not detected.
Single USB Coral
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware, select EdgeTPU from the detector type dropdown and click Add, then set device to usb.
detectors:
  coral:
    type: edgetpu
    device: usb
Multiple USB Corals
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown, clicking Add for each detector and specifying usb:0 and usb:1 as the device for each.
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
Native Coral (Dev Board)
warning: may have compatibility issues after v0.9.x
- Frigate UI
- YAML
Navigate to Settings