Code Explanation: Number Plate Recognition Using CNN

Import Libraries:

import os
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg19 import VGG19
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2
from tensorflow.keras.optimizers import Adam
import xml.etree.ElementTree as ET

Library Explanation:

  • import os: This library provides a way to interact with the operating system and access file paths and directories.
  • import cv2: This is the OpenCV library used for image processing and computer vision tasks.
  • import numpy as np: This imports the NumPy library for scientific computing and arrays.
  • import tensorflow as tf: This imports the TensorFlow library for machine learning and deep learning tasks.
  • from tensorflow.keras.layers import Flatten, Dense: This imports the Flatten and Dense layers from Keras, which are used to build neural networks.
  • from tensorflow.keras.models import Model: This imports the Model class from Keras, which is used to create a deep learning model.
  • from tensorflow.keras.applications.vgg16 import VGG16: This imports the pre-trained VGG16 model from Keras, which is a convolutional neural network commonly used for image classification tasks.
  • from tensorflow.keras.applications.vgg19 import VGG19: This imports the pre-trained VGG19 model from Keras, which is similar to VGG16 but has more layers.
  • from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2: This imports the pre-trained MobileNetV2 model from Keras, which is a lightweight convolutional neural network commonly used for mobile and embedded devices.
  • from tensorflow.keras.optimizers import Adam: This imports the Adam optimizer from Keras, which is an algorithm used to optimize the weights of a neural network during training.
  • import xml.etree.ElementTree as ET: This imports the ElementTree library for parsing XML files.

Define Input Shape & Batch Size:

input_shape = (224, 224, 3)
batch_size = 32

Code Explanation:

  • input_shape: This is a tuple that specifies the dimensions of the input images. In this case, each input image has a width and height of 224 pixels and three color channels (red, green, and blue).
  • batch_size: This specifies the number of samples fed into the model at once during training. In this case, the model processes 32 images at a time (a preprocessing sketch follows this list).
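The images themselves must be brought to this shape before training. A minimal preprocessing sketch with OpenCV and NumPy (the image filename is a made-up example, not part of the repository code):

import cv2
import numpy as np

# Hypothetical example image; the repository's actual loading code is not shown here.
image = cv2.imread('example_car.jpg')

# Resize to the 224x224 size expected by the network and scale pixel values to [0, 1].
resized = cv2.resize(image, (224, 224))
normalized = resized.astype(np.float32) / 255.0

# During training, batch_size such images are stacked into a single array.
batch = np.stack([normalized] * 32)
print(batch.shape)  # (32, 224, 224, 3)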

Define Base Model:

base_model = VGG16(input_shape=input_shape, include_top=False, weights='imagenet')

Code Explanation:

This line of code creates a VGG16 model instance called base_model.

  • input_shape=input_shape: This parameter specifies the shape of the input data to the model.
  • include_top=False: This excludes VGG16's original fully connected classification layers, so custom layers can be added on top of the convolutional base.
  • weights='imagenet': This parameter specifies that the model is initialized with weights pre-trained on the ImageNet dataset (see the sketch after this list).
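Because the backbone ships with ImageNet weights, a common companion step is to freeze the pre-trained layers so that only the newly added head is trained; whether the repository does this is not shown in this excerpt, so the lines below are only a sketch of that option:

# Optional: freeze the convolutional base so its ImageNet weights stay fixed.
for layer in base_model.layers:
    layer.trainable = False

# With include_top=False and a 224x224x3 input, VGG16 outputs a 7x7x512 feature map.
print(base_model.output_shape)  # (None, 7, 7, 512)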

Add Custom Layers to the Pre-trained Model:

x = base_model.output
x = Flatten()(x)
x = Dense(4, activation='linear')(x)
model = Model(inputs=base_model.input, outputs=x)

Code Explanation:

This code adds custom layers to the pre-trained model.

  • x = base_model.output: This line sets x to the output of the pre-trained VGG16 model, i.e. the feature map produced by its final convolutional block (the fully connected layers were excluded with include_top=False).
  • x = Flatten()(x): This line adds a Flatten layer that turns that feature map into a one-dimensional vector.
  • x = Dense(4, activation='linear')(x): This line adds a Dense layer with 4 units and a linear activation function; the four outputs correspond to the bounding-box coordinates of the number plate.
  • model = Model(inputs=base_model.input, outputs=x): This creates a new model that combines the pre-trained VGG16 base with the custom layers above, so the network predicts the plate's bounding box; a sketch showing the same head on the other imported backbones follows this list.
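Since VGG19 and MobileNetV2 are also imported, the same regression head can be attached to those backbones in exactly the same way. A sketch of a small helper for that (the helper name build_model is illustrative, not taken from the repository):

def build_model(backbone_cls):
    # Instantiate the chosen backbone without its classification top.
    backbone = backbone_cls(input_shape=input_shape, include_top=False, weights='imagenet')
    # Attach the same flatten + 4-unit regression head used for VGG16.
    x = Flatten()(backbone.output)
    x = Dense(4, activation='linear')(x)
    return Model(inputs=backbone.input, outputs=x)

vgg19_model = build_model(VGG19)
mobilenet_model = build_model(MobileNetV2)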

Training configuration for the model:

optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])

Code Explanation:

This code sets up the optimizer, loss function, and evaluation metric for the model, preparing it for training.

  • optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001): This line creates an instance of the Adam optimizer with a learning rate of 0.0001. The optimizer is used during training to adjust the weights of the model in order to minimize the loss function.
  • model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy']): loss='mse' specifies mean squared error as the training loss, which measures the difference between the predicted and true bounding-box values. optimizer=optimizer uses the Adam instance created in the previous line, and metrics=['accuracy'] tracks accuracy during training (accuracy is mainly meaningful for classification, so for this regression task it is only a rough indicator). A worked example of the loss follows this list.
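To make the loss concrete, here is what mean squared error computes for one predicted bounding box against its ground truth (the numbers are made up for illustration):

import numpy as np

y_true = np.array([0.20, 0.35, 0.60, 0.55])  # e.g. normalized xmin, ymin, xmax, ymax
y_pred = np.array([0.25, 0.30, 0.58, 0.60])

# Average of the squared differences over the four coordinates.
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # approximately 0.002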

Save the best model:

save_vgg16_model = tf.keras.callbacks.ModelCheckpoint(
    "/content/drive/MyDrive/Colab Notebooks/Number-Plate-Recognition-Model/vgg16model.h5", 
    monitor='accuracy', 
    save_best_only=True, 
    verbose=1
)

Code Explanation:

This code snippet creates a callback function using the tf.keras.callbacks.ModelCheckpoint class that saves the best-performing model during the training process.

  • "/content/drive/MyDrive/Colab Notebooks/Number-Plate-Recognition-Model/vgg16model.h5": This parameter specifies the path where the model weights will be saved.
  • monitor='accuracy': This tells the callback to monitor the model's accuracy during training.
  • save_best_only=True: This parameter ensures that only the best model (based on the monitored metric) will be saved. If set to False, the function will save the model after every epoch.
  • verbose=1: This parameter sets the verbosity level of the output messages during training. A value of 1 means that progress updates will be printed to the console.
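The training call itself is not included in this excerpt; a minimal sketch of how the callback would typically be passed to model.fit (X_train, y_train, X_val, y_val are placeholder arrays of images and bounding-box targets, and the epoch count is an assumption):

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    batch_size=batch_size,
    epochs=50,                      # epoch count is an assumption
    callbacks=[save_vgg16_model]    # saves the best model seen so far
)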

Directory Paths:

training_directory = '/content/drive/MyDrive/Colab Notebooks/Number-Palte-Dataset/train'
validation_directory = '/content/drive/MyDrive/Colab Notebooks/Number-Palte-Dataset/valid'

Code Explanation:

These are the directory paths for the training and validation datasets used in this project.
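The xml.etree.ElementTree import above suggests that the bounding boxes are read from Pascal VOC-style XML annotations stored alongside the images. A sketch of such a parser (the tag names are an assumption about the dataset's annotation format):

def parse_annotation(xml_path):
    # Read one annotation file and return [xmin, ymin, xmax, ymax] of the plate.
    tree = ET.parse(xml_path)
    box = tree.getroot().find('object').find('bndbox')
    return [
        int(box.find('xmin').text),
        int(box.find('ymin').text),
        int(box.find('xmax').text),
        int(box.find('ymax').text),
    ]

# Example usage (hypothetical filename):
# plate_box = parse_annotation(os.path.join(training_directory, 'car_001.xml'))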
