
Sign-Language-Recognition-for-differently-abled-people (Minor Project)

Abstract

Sign language is a visual language used by people with hearing disabilities to communicate with each other and with hearing people. However, understanding sign language can be a challenge for those who are not familiar with it. Machine learning and deep neural networks can be used to detect sign language gestures and translate them into text or speech to help bridge this communication gap.

In this project, a machine learning model based on a deep neural network is developed to detect sign language gestures. The model is trained on a large dataset of sign language gestures and uses image processing techniques to detect and track the movements of the hands and fingers. The deep neural network then classifies the detected gestures into the corresponding sign language words or phrases.
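The README does not include the model code itself; the following is a minimal sketch of what such a gesture classifier could look like in Keras. The dataset path, image size, and class count are assumptions for illustration, not the project's actual values.

```python
# Minimal sketch of a CNN gesture classifier (assumed architecture; assumed
# dataset layout: one folder per gesture class under data/train/).
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = (64, 64)      # assumed input resolution
NUM_CLASSES = 26         # assumed: one class per letter of the alphabet

# Load labeled gesture images from per-class folders
train_ds = keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("sign_model.h5")   # hypothetical output file used below
```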

The main objective of our project is to develop a user-friendly and accessible system that can facilitate communication and promote inclusivity for differently-abled individuals. By enabling smooth communication between those who use sign language and those who do not, our system aims to improve the quality of life of differently-abled individuals and promote a more inclusive and accessible society.


Python Libraries Used

1. Latest pip -> pip install --upgrade pip

2. numpy -> pip install numpy

3. os / sys -> part of Python's standard library (no installation needed)

4. opencv -> pip install opencv-python

5. TensorFlow -> pip install tensorflow

6. Keras -> pip install keras (also bundled with TensorFlow 2.x)

7. tkinter -> bundled with most Python installations (on Debian/Ubuntu: sudo apt-get install python3-tk)
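Once the packages are installed, a real-time recognition loop can tie them together. The sketch below is a hypothetical example, assuming the model file sign_model.h5 from the training sketch above, a fixed hand region in the frame, and one class per alphabet letter.

```python
# Hypothetical real-time recognition loop: capture frames with OpenCV,
# crop a fixed region of interest, and classify it with the trained model.
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("sign_model.h5")          # assumed model file
labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed classes

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]                          # assumed hand region
    img = cv2.resize(roi, (64, 64)).astype("float32")
    pred = model.predict(img[np.newaxis, ...], verbose=0)
    label = labels[int(np.argmax(pred))]
    # Draw the predicted letter and the capture region on the frame
    cv2.putText(frame, label, (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```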

Authors