aditya-524/ImageClass_TL

Image Classification with Transfer Learning

About The Project

This repository explores the effectiveness of transfer learning for image classification. By leveraging pre-trained models, specifically VGG networks, the goal was to show that reusing pretrained ConvNets through transfer learning can significantly improve classification accuracy with minimal computational resources compared to training models from scratch. That said, the accuracy achieved topped out at 53.09%.

In my study, I used the VGG model with weights pretrained on ImageNet and trained a modified version of it on CINIC-10. I experimented with various configurations of the VGG model to classify images across multiple categories. The findings are detailed in my research paper and code.

Below are some of the results from the paper.

Training curves

Overview


Transfer Learning Efficiency: Utilizing VGG pre-trained models allows for considerable improvements in accuracy.
Model Comparison: Analysis of different VGG configurations to determine the most effective setup for our classification task.
Performance Metrics: Evaluation of models based on accuracy, precision, and recall metrics.
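The accuracy, precision, and recall metrics mentioned above can be computed with scikit-learn (already a project dependency). A minimal sketch, using made-up labels in place of real model predictions:

```python
# Illustrative only: computing the evaluation metrics named above with
# scikit-learn, using placeholder labels instead of real model output.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 1, 0]   # ground-truth class indices (placeholder data)
y_pred = [0, 1, 2, 1, 1, 0]   # model predictions (placeholder data)

accuracy = accuracy_score(y_true, y_pred)
# macro averaging weights every class equally (CINIC-10 has 10 classes)
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
```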

General Flowchart for the project

Implementing

The project is implemented as a Jupyter notebook, so it is fairly straightforward to download it and rerun the experiments. In the notebook I used four versions of the model:

  1. VGG16 model with frozen (untrainable) convolutional layers, replacing the classifier with a flatten layer, a dense layer with ReLU activation, and an output layer with a softmax activation function.

  2. The same configuration as model 1, with data augmentation added for training the model.

  3. The same configuration as model 1, with data augmentation for training plus regularization methods such as batch normalization and a dropout layer.

  4. VGG16 model with fully trainable weights, replacing the classifier with a flatten layer, a dense layer with ReLU activation, and an output layer with a softmax activation function.
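As a rough sketch of model 1 (not the exact notebook code), the frozen VGG16 base plus the replacement classifier head can be built in Keras as below. The dense-layer width of 256 is an assumed placeholder, and `weights=None` is used only so the sketch runs without downloading; the project itself loads the pretrained ImageNet weights with `weights="imagenet"`.

```python
# Sketch of "model 1" above, assuming TensorFlow/Keras.
# weights=None keeps the sketch self-contained offline; the project uses
# weights="imagenet" to load the pretrained convolutional filters.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# include_top=False drops VGG16's original 1000-class ImageNet classifier;
# CINIC-10 images are 32x32 RGB, so the input shape is set accordingly.
conv_base = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False  # model 1: convolutional layers are frozen

model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),    # assumed width, see lead-in
    # model 3 would add regularization here, e.g. layers.BatchNormalization()
    # and layers.Dropout(...); model 4 instead sets conv_base.trainable = True
    layers.Dense(10, activation="softmax"),  # CINIC-10 has 10 classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Models 2 and 3 keep this architecture and only change the training pipeline (data augmentation, regularization), so the same construction covers all four variants with the tweaks noted in the comments.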

Dependencies

The code relies on the following Python packages:

  • matplotlib
  • visualkeras
  • pickle
  • NumPy
  • scikit-learn
  • TensorFlow

License

Distributed under the MIT License. See LICENSE for more information.

Authors

Project Link: Project
Kaggle Notebook:
Colab Notebook:

Thank you

Note

I did not conduct hyperparameter tuning of the model, which is an essential step for achieving good accuracy.
