Author: Pedro Vinícius A. B. Venâncio
This repository contains source code and models developed during my master's research on hybrid fire and smoke detection systems, along with baseline models for comparison.
The proposed hybrid systems are composed of two sequential stages:
- Spatial Detection — identifies and locates fire/smoke events using spatial patterns.
- Temporal Analysis — verifies whether a fire event is truly taking place based on temporal behavior of detected regions.
Baseline models are standard convolutional neural networks (CNNs) proposed in the literature for fire classification.
- Clone the repository and place your input videos in the `examples/` folder.
- Build the Docker image:

```
docker build -t fire-detection .
```

- Run the container:

```
docker run -it --rm fire-detection /bin/bash
```
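If your input videos are not baked into the image, you can instead mount the local `examples/` folder into the container. This is a minimal sketch assuming the repository is copied to `/app` inside the image; adjust the target path to match the Dockerfile:

```
# Mount the local examples/ folder into the container
# (assumes the image's working directory is /app; adjust if different)
docker run -it --rm -v "$(pwd)/examples:/app/examples" fire-detection /bin/bash
```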
Select one of the detection methods below and follow its respective instructions.
The hybrid detection system combines two stages:
- Stage 1: A YOLOv5 network (`small` or `large`) identifies candidate fire/smoke regions in each frame.
- Stage 2: A temporal analysis method confirms true fire events:
  - AVT (Area Variation Technique): recommended for outdoor scenes.
  - TPT (Temporal Persistence Technique): recommended for indoor scenes.

Detections are saved to `runs/detect/exp/`.
If you want to use the hybrid system YOLOv5+AVT, run the following command inside the container:

```
python detect.py --source <video_file> --weights ./weights/<weights_file> --temporal tracker
```

where `<video_file>` is the video in which fire will be detected and `<weights_file>` is the file with the network weights (either `yolov5s.pt` or `yolov5l.pt`). You can change the parameters of the area variation technique by specifying the additional flags `--area-thresh` and `--window-size`.
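For example, assuming a video named `examples/fire1.mp4` (the file name and flag values here are illustrative, not repository defaults):

```
python detect.py --source examples/fire1.mp4 --weights ./weights/yolov5s.pt \
    --temporal tracker --area-thresh 0.25 --window-size 30
```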
If you want to use the hybrid system YOLOv5+TPT, run the following command inside the container:

```
python detect.py --source <video_file> --weights ./weights/<weights_file> --temporal persistence
```

where `<video_file>` is the video in which fire will be detected and `<weights_file>` is the file with the network weights (either `yolov5s.pt` or `yolov5l.pt`). You can change the parameters of the temporal persistence technique by specifying the additional flags `--persistence-thresh` and `--window-size`.
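For example, assuming a video named `examples/fire2.mp4` (the file name and flag values here are illustrative, not repository defaults):

```
python detect.py --source examples/fire2.mp4 --weights ./weights/yolov5l.pt \
    --temporal persistence --persistence-thresh 0.8 --window-size 30
```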
To run only the YOLOv5 network without temporal analysis, use:

```
python detect.py --source <video_file> --imgsz 640 --weights ./weights/<weights_file>
```

where `<video_file>` is the video in which fire will be detected and `<weights_file>` is the file with the network weights (either `yolov5s.pt` or `yolov5l.pt`). You can change the parameters of the YOLOv5 network by specifying the additional flags `--imgsz`, `--conf-thres`, and `--iou-thres`.
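For example, assuming a video named `examples/fire3.mp4` (the file name and threshold values are illustrative; `0.25` and `0.45` are common YOLOv5 defaults):

```
python detect.py --source examples/fire3.mp4 --imgsz 640 \
    --weights ./weights/yolov5s.pt --conf-thres 0.25 --iou-thres 0.45
```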
If you want to use a baseline model, run the following command inside the container:

```
python baseline.py --video <video_file> --model <model_name>
```

where `<video_file>` is the video in which fire will be detected and `<model_name>` is the name of the model to be used (either `firenet` or `mobilenet`).
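For example, assuming a video named `examples/fire1.mp4`:

```
python baseline.py --video examples/fire1.mp4 --model firenet
```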
Run the script to fetch all model weights:

```
./scripts/download_models.sh
```

Or download manually:
Please cite the following paper if you use our proposed hybrid systems for fire and smoke detection:
- Pedro Vinícius Almeida Borges de Venâncio, Roger Júnio Campos, Tamires Martins Rezende, Adriano Chaves Lisboa, Adriano Vilela Barbosa: A hybrid method for fire detection based on spatial and temporal patterns. In: Neural Computing and Applications, 2023.
If you use our YOLOv4 models for fire and smoke detection, please cite the following paper:
- Pedro Vinícius Almeida Borges de Venâncio, Adriano Chaves Lisboa, Adriano Vilela Barbosa: An automatic fire detection system based on deep convolutional neural networks for low-power, resource-constrained devices. In: Neural Computing and Applications, 2022.
- YOLO models: YOLOv4 (Darknet), YOLOv5 (PyTorch)
- Object tracking: OpenCV Tracker (PyImageSearch).
- Baseline models: Inferno CNN and FireNet.
- Datasets: D-Fire dataset, FireNet dataset, and Foggia's dataset.