Network Requirements
Frigate is designed to run locally and does not require a persistent internet connection for core functionality. However, certain features need internet access for initial setup or ongoing operation. This page describes what connects to the internet, when, and how to control it.
How Frigate Uses the Internet
Frigate's internet usage falls into three categories:
- One-time model downloads – ML models are downloaded the first time a feature is enabled, then cached locally. No internet is needed on subsequent startups.
- Optional cloud services – Features like Frigate+ and Generative AI connect to external APIs only when explicitly configured.
- Build-time dependencies – Components bundled into the Docker image during the build process. These require no internet at runtime.
After initial setup, Frigate can run fully offline as long as all required models have been downloaded and no cloud-dependent features are enabled.
One-Time Model Downloads
The following models are downloaded automatically the first time their associated feature is enabled. Once cached in `/config/model_cache/`, they do not require internet access again.
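Because the cache lives under `/config`, mapping that directory to persistent storage keeps models across container recreations. A minimal Docker Compose sketch (the host path is illustrative):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      # Persisting /config also persists /config/model_cache/,
      # so models are not re-downloaded when the container is recreated
      - ./frigate/config:/config
```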
| Feature | Models Downloaded | Source |
|---|---|---|
| Semantic search | Jina CLIP v1 or v2 (ONNX) + tokenizer | HuggingFace |
| Face recognition | FaceNet, ArcFace, face detection model | GitHub |
| License plate recognition | PaddleOCR (detection, classification, recognition) + YOLOv9 plate detector | GitHub |
| Bird classification | MobileNetV2 bird model + label map | GitHub |
| Custom classification (training) | MobileNetV2 ImageNet base weights (via Keras) | Google storage |
| Audio transcription | Whisper or Sherpa-ONNX streaming model | HuggingFace / OpenAI |
Hardware-Specific Detector Models
If you are using one of the following hardware detectors and have not provided your own model file, a default model will be downloaded on first startup:
| Detector | Model Downloaded | Source |
|---|---|---|
| Rockchip RKNN | RKNN detection model | GitHub |
| Hailo 8 / 8L | YOLOv6n (.hef) | Hailo Model Zoo (AWS S3) |
| AXERA AXEngine | Detection model | HuggingFace |
The default CPU, EdgeTPU, and OpenVINO object detection models are bundled into the Docker image and do not require any download at runtime.
Preventing Model Downloads
If you have already downloaded all required models and want to prevent Frigate from attempting any outbound connections to HuggingFace or the Transformers library, set the following environment variables on your Frigate container:
```yaml
environment:
  HF_HUB_OFFLINE: "1"
  TRANSFORMERS_OFFLINE: "1"
```
Setting these variables without the correct model files already cached in `/config/model_cache/` will cause failures. Only enable them after a successful initial setup with internet access.
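In a Docker Compose file, the two variables sit alongside the rest of the service definition (a sketch; the service name and image tag are illustrative):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      # Set only after models are already cached in /config/model_cache/
      HF_HUB_OFFLINE: "1"
      TRANSFORMERS_OFFLINE: "1"
```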
Mirror Support
If your Frigate instance has restricted internet access, you can point model downloads at internal mirrors using environment variables:
| Environment Variable | Default | Used By |
|---|---|---|
| `HF_ENDPOINT` | https://huggingface.co | Semantic search, Sherpa-ONNX, AXEngine models |
| `GITHUB_ENDPOINT` | https://github.com | Face recognition, LPR, RKNN models |
| `GITHUB_RAW_ENDPOINT` | https://raw.githubusercontent.com | Bird classification |
| `TF_KERAS_MOBILENET_V2_WEIGHTS_URL` | Google storage (Keras default) | Custom classification training |
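For example, to route HuggingFace and GitHub downloads through internal mirrors, set the variables on the container (the hostnames below are placeholders for your own mirror servers):

```yaml
environment:
  HF_ENDPOINT: "https://hf-mirror.internal.example"
  GITHUB_ENDPOINT: "https://github-mirror.internal.example"
  GITHUB_RAW_ENDPOINT: "https://raw-mirror.internal.example"
```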
Optional Cloud Services
These features connect to external services during normal operation and require internet whenever they are active.
Frigate+
When a Frigate+ API key is configured, Frigate communicates with https://api.frigate.video to download models, upload snapshots for training, submit annotations, and report false positives. Remove the API key to disable all Frigate+ network activity.
See Frigate+ for details.
Generative AI
When a Generative AI provider is configured, Frigate sends images and prompts to the configured provider for event descriptions, chat, and camera monitoring. Available providers:
| Provider | Internet Required |
|---|---|
| OpenAI | Yes – connects to OpenAI API (or custom base URL) |
| Google Gemini | Yes – connects to Google Generative AI API |
| Azure OpenAI | Yes – connects to your Azure endpoint |
| Ollama | Depends – typically local (localhost:11434), but can be remote |
| llama.cpp | No – runs entirely locally |
Disable Generative AI by removing the `genai` configuration from your cameras. See Generative AI for details.
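As one illustration of a setup that stays on the local network, a Generative AI configuration pointing at a local Ollama instance might look like the following. This is a sketch: the model name is an example, and the exact option schema may differ between Frigate versions, so consult the Generative AI documentation:

```yaml
genai:
  enabled: true
  provider: ollama
  base_url: http://localhost:11434   # local Ollama; no internet required
  model: llava                       # example vision-capable model
```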
Version Check
Frigate checks GitHub for the latest release version on startup by querying https://api.github.com. This can be disabled:
```yaml
telemetry:
  version_check: false
```
Push Notifications
When notifications are enabled and users have registered for push notifications in the web UI, Frigate sends push messages through the browser vendor's push service (e.g., Google FCM, Mozilla autopush). This requires internet access from the Frigate server to these push endpoints.
MQTT
If an MQTT broker is configured, Frigate maintains a connection to the broker's host and port. This is typically a local network connection, but will require internet if you use a cloud-hosted MQTT broker.
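A typical local-broker configuration keeps this traffic on your LAN (the host address and credentials below are placeholders):

```yaml
mqtt:
  enabled: true
  host: 192.168.1.10   # local broker; no internet needed
  port: 1883
  user: frigate
  password: your-password
```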
DeepStack / CodeProject.AI
When using the DeepStack detector plugin, Frigate sends images to the configured API endpoint for inference. This is typically local but depends on where the service is hosted.
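A sketch of a DeepStack detector pointed at a host on the local network (the IP address is a placeholder; check the detector documentation for the exact fields):

```yaml
detectors:
  deepstack:
    type: deepstack
    api_url: http://192.168.1.20:5000/v1/vision/detection
    api_timeout: 0.1   # seconds
```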
WebRTC (STUN)
For WebRTC live streaming, Frigate uses STUN for NAT traversal:
- go2rtc defaults to a local STUN listener (`stun:8555`) – no internet required.
- The web UI's WebRTC player includes a fallback to Google's public STUN server (`stun:stun.l.google.com:19302`), which requires internet.
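To keep WebRTC fully local, go2rtc's listener and candidates can be configured explicitly. A sketch, assuming Frigate's default WebRTC port of 8555 and a placeholder LAN address:

```yaml
go2rtc:
  webrtc:
    listen: ":8555"
    candidates:
      - 192.168.1.5:8555   # LAN IP of the Frigate host
      - stun:8555          # resolve the address via the local STUN listener
```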
Home Assistant Supervisor
When running as a Home Assistant add-on, the go2rtc startup script queries the local Supervisor API (http://supervisor/) to discover the host IP address and WebRTC port. This is a local network call to the Home Assistant host, not an internet connection.
What Does NOT Require Internet
- Object detection – CPU, EdgeTPU, OpenVINO, and other bundled detector models are included in the Docker image.
- Recording and playback – All video is stored and served locally.
- Live streaming – Camera streams are pulled over your local network. MSE and HLS streaming work without any external connections.
- The web interface – Fully self-contained with no external fonts, scripts, analytics, or CDN dependencies. All translations are bundled locally.
- Custom classification inference – After training, custom models run entirely locally.
- Audio detection – The YAMNet audio classification model is bundled in the Docker image.
Running Frigate Offline
To run Frigate in an air-gapped or offline environment:
- Pre-download models – Start Frigate with internet access once with all desired features enabled. Models will be cached in `/config/model_cache/`.
- Disable version check – Set `telemetry.version_check: false` in your configuration.
- Block outbound model requests – Set the `HF_HUB_OFFLINE=1` and `TRANSFORMERS_OFFLINE=1` environment variables to prevent HuggingFace and Transformers from attempting any network requests.
- Avoid cloud features – Do not configure Frigate+, Generative AI providers that require internet, or cloud MQTT brokers.
- Use local model mirrors – If limited internet is available, set the `HF_ENDPOINT`, `GITHUB_ENDPOINT`, and `GITHUB_RAW_ENDPOINT` environment variables to point to local mirrors.
After these steps, Frigate will operate with no outbound internet connections.
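Putting the steps together, an air-gapped deployment might look like this Docker Compose sketch (the image tag and host path are illustrative; `telemetry.version_check: false` goes in the Frigate config file itself):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      - ./frigate/config:/config   # pre-populated model_cache lives here
    environment:
      # Prevent any outbound HuggingFace/Transformers requests
      HF_HUB_OFFLINE: "1"
      TRANSFORMERS_OFFLINE: "1"
```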