This project is under development for recognizing Bangla license plates from recorded videos as well as in real time. Detection is done with the YOLOv4 algorithm. Recognition is currently done with PyTesseract; work on a custom recognition model is in progress.
# Conda: TensorFlow CPU
conda env create -f conda-cpu.yml
conda activate yolov4-cpu

# Conda: TensorFlow GPU
conda env create -f conda-gpu.yml
conda activate yolov4-gpu

# Pip: TensorFlow CPU
pip install -r requirements.txt

# Pip: TensorFlow GPU
pip install -r requirements-gpu.txt
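To confirm the environment is set up correctly (especially the GPU one), a quick check from Python is enough; this is a minimal sketch, and on the CPU environment the GPU list will simply be empty:

```python
import tensorflow as tf

# Version installed by the conda env or requirements file
print(tf.__version__)

# Any GPUs TensorFlow can see; an empty list means it will fall back to CPU
print(tf.config.list_physical_devices('GPU'))
```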
Use the license-plate-trained custom weights from "The AI Guy": https://drive.google.com/file/d/1EUPtbtdF0bjRtNjGv436vDY28EN5DXDH/view?usp=sharing
Copy your custom.weights file into the 'data' folder and your custom .names file into the 'data/classes/' folder.
To implement YOLOv4 using TensorFlow, first we convert the .weights into the corresponding TensorFlow model files and then run the model.
# Convert darknet weights to tensorflow
## yolov4
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4
The following commands will allow you to run your custom yolov4 model. (video and webcam commands work as well)
# custom yolov4
python save_model.py --weights ./data/custom.weights --output ./checkpoints/custom-416 --input_size 416 --model yolov4
# Run custom yolov4 tensorflow model
python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./data/images/car.jpg
First, some preprocessing is done on the license plate in order to correctly extract the license plate number from the image. The function in charge of the preprocessing and text extraction is called recognize_plate and can be found in the file "core/utils.py".
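The exact steps live in core/utils.py; the sketch below only illustrates the general approach of that kind of function (grayscale, upscale, threshold, then OCR with pytesseract) and is not a copy of the repository code:

```python
import cv2
import pytesseract

def recognize_plate_sketch(plate_img):
    """Illustrative only: a typical preprocess-then-OCR pipeline for a cropped plate.
    The actual implementation is recognize_plate() in core/utils.py."""
    # Grayscale and upscale so small characters are easier for Tesseract to read
    gray = cv2.cvtColor(plate_img, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)

    # Light blur, then Otsu binarization
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Single-line page segmentation mode; for Bangla characters you would also
    # need the 'ben' traineddata installed and pass lang="ben"
    text = pytesseract.image_to_string(thresh, config="--psm 7 --oem 3")
    return text.strip()
```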
Disclaimer: In order to run Tesseract OCR you must first download the binary files and set them up on your local machine. Please do so before proceeding or commands will not run as expected!
Official Tesseract OCR Github Repo: tesseract-ocr/tessdoc
Great Article for How To Install Tesseract on Mac or Linux Machines: PyImageSearch Article
For Windows: Windows Install
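On Windows in particular, pytesseract may not find the Tesseract binary automatically. If that happens, point it at your install location; the path below is only an example and should be adjusted to wherever you installed Tesseract:

```python
import pytesseract

# Example path only -- adjust to your own Tesseract install location
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
```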
The license plate recognition works wonders on images. All you need to do is add the --plate flag to the command that runs the custom YOLOv4 model.
Try it out on this image in the repository!
# Run License Plate Recognition
python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./data/images/car2.jpg --plate
The output from the above command should print any license plate numbers found to your command terminal, as well as save the resulting image to the detections folder.
You should be able to see the license plate number printed on the screen above the bounding box found by YOLOv4.
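For reference, overlaying the OCR result above a detection comes down to a couple of OpenCV calls. This is a minimal sketch with illustrative variable names, not the exact drawing code used in core/utils.py:

```python
import cv2

def draw_plate_text(frame, bbox, plate_text):
    # bbox = (x1, y1, x2, y2) in pixel coordinates; illustrative helper only
    x1, y1, x2, y2 = bbox
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    # Put the recognized plate number just above the bounding box
    cv2.putText(frame, plate_text, (x1, max(y1 - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return frame
```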
Running the license plate recognition on video at the same time as the YOLOv4 object detections causes a few issues. Tesseract OCR is fairly expensive in terms of time complexity and slows the processing of the video to a snail's pace. It can still be accomplished by adding the --plate command line flag to any detect_video.py commands.
Running License Plate Recognition with detect_video.py is done with the following command.
python detect_video.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --video ./data/video/license_plate.mp4 --output ./detections/recognition.mp4 --plate
A more efficient route is to use the following command instead, which crops the detections and saves them as images; the rate at which detections are cropped can be customized within the code itself.
python detect_video.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --video ./data/video/license_plate.mp4 --output ./detections/recognition.mp4 --crop
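The "rate" is just a frame interval inside the video loop. The sketch below only illustrates the frame-skipping idea with an assumed constant name; it is not the exact loop in detect_video.py:

```python
import cv2

CROP_EVERY_N_FRAMES = 30  # example value -- tune to your frame rate / use case

cap = cv2.VideoCapture("./data/video/license_plate.mp4")
frame_num = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_num += 1
    # ... YOLOv4 detection on `frame` happens here in detect_video.py ...
    if frame_num % CROP_EVERY_N_FRAMES == 0:
        # crop the detected plate regions / run recognition only on these frames
        pass
cap.release()
```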
save_model.py:
--weights: path to weights file
(default: './data/custom.weights')
--output: path to output
(default: './checkpoints/custom-416')
--[no]tiny: yolov4 or yolov4-tiny
(default: 'False')
--input_size: define input size of export model
(default: 416)
--framework: what framework to use (tf, trt, tflite)
(default: tf)
--model: yolov3 or yolov4
(default: yolov4)
detect.py:
--images: path to input images as a string with images separated by ","
(default: './data/images/car.jpg')
--output: path to output folder
(default: './detections/')
--[no]tiny: yolov4 or yolov4-tiny
(default: 'False')
--weights: path to weights file
(default: './checkpoints/custom-416')
--framework: what framework to use (tf, trt, tflite)
(default: tf)
--model: yolov3 or yolov4
(default: yolov4)
--size: resize images to
(default: 416)
--iou: iou threshold
(default: 0.45)
--score: confidence threshold
(default: 0.25)
--count: count objects within images
(default: False)
--dont_show: don't show image output
(default: False)
--info: print info on detections
(default: False)
--crop: crop detections and save as new images
(default: False)
detect_video.py:
--video: path to input video (use 0 for webcam)
(default: './data/video/license_plate.mp4')
--output: path to output video (remember to set right codec for given format. e.g. XVID for .avi)
(default: None)
--output_format: codec used in VideoWriter when saving video to file
(default: 'XVID')
--[no]tiny: yolov4 or yolov4-tiny
(default: 'False')
--weights: path to weights file
(default: './checkpoints/custom-416')
--framework: what framework to use (tf, trt, tflite)
(default: tf)
--model: yolov3 or yolov4
(default: yolov4)
--size: resize images to
(default: 416)
--iou: iou threshold
(default: 0.45)
--score: confidence threshold
(default: 0.25)
--count: count objects within video
(default: False)
--dont_show: don't show video output
(default: False)
--info: print info on detections
(default: False)
--crop: crop detections and save as new images
(default: False)
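As an example of combining the flags above, the following command runs the custom model on a webcam feed (--video 0) while counting detections and printing detection info:

# Run custom yolov4 on webcam with object counting and detection info
python detect_video.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --video 0 --count --info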
Huge shoutout goes to "The AI Guy" for creating the backbone of this repository: