AWS Rekognition

For this project, I used AWS Rekognition to analyze images with pre-trained AWS machine-learning models. Rekognition offers six different features, including text detection, object labeling, and celebrity facial recognition. The labeling service analyzes an input image and returns the names of any objects, colors, or context it detects. I focused primarily on the labeling portion of Rekognition, and I began by reading the Amazon Rekognition developer guide on the AWS website. The guide includes example Python code for labeling an image stored in an Amazon Simple Storage Service (S3) bucket. My overall goal was to create a web interface where a user could upload an image, have it sent to my S3 bucket, and receive ten detected labels for that image. As a starting point, I paired my S3 bucket with a boto3 session and ran Rekognition on an image that was already stored in the bucket.

To create the Flask app for the front end, I wrote two routes: one displays the home page, and the other performs the back-end work of uploading an image to my S3 bucket. The home page contains an HTML form with a file-upload button. I learned how to extract and save the uploaded file from a Python Basics tutorial, and I avoided file-type complications by limiting the types of files a user can upload. For the S3 upload portion of the form, I paired the Flask app with my bucket by creating a boto3 client for S3, which provides a built-in method for uploading files.

Once the file was uploaded to my S3 bucket, the route returned a call to a function I wrote called detect_labels_html, a variation of my original label-detection function. The AWS tutorial simply printed the detected labels to the console, so I had to format the return value differently: I created a string called html, concatenated each label and phrase onto it, and added HTML line breaks for readability. I also reduced the amount of information returned to the user, keeping only the detected labels, aliases, confidence percentages, and categories. Once the html variable contained all ten labels, I returned it so that the label information is displayed in the same browser window where the user uploaded the image. A sketch of how these pieces fit together is shown below.

In summary, my Flask app pairs an HTML home-page template with back-end code that performs the Rekognition task. When the user presses the submit button, the app returns the result of the Rekognition function: ten labels determined for the submitted image, along with their confidence levels, aliases, and categories. Overall, I found Rekognition to be a powerful and straightforward AWS service, and I would look into working with more Rekognition features in future projects.
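The following is a minimal sketch of how an app like the one described above could be structured, using a Flask front end and boto3 clients for S3 and Rekognition. The bucket name, route path, form field name, and template file name are placeholders introduced for illustration, not the repository's actual values.

```python
import boto3
from flask import Flask, render_template, request

app = Flask(__name__)

BUCKET = "my-rekognition-bucket"              # assumption: placeholder bucket name
ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg"}   # limit uploads to common image types

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")


def allowed_file(filename):
    """Restrict uploads to avoid file-type complications."""
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS


def detect_labels_html(photo, bucket, max_labels=10):
    """Detect labels for an S3-stored image and build an HTML string of the results."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": photo}},
        MaxLabels=max_labels,
    )
    html = ""
    for label in response["Labels"]:
        aliases = ", ".join(a["Name"] for a in label.get("Aliases", [])) or "none"
        categories = ", ".join(c["Name"] for c in label.get("Categories", []))
        html += f"Label: {label['Name']}<br>"
        html += f"Confidence: {label['Confidence']:.2f}%<br>"
        html += f"Aliases: {aliases}<br>"
        html += f"Categories: {categories}<br><br>"
    return html


@app.route("/")
def home():
    # Home page with the HTML upload form (assumes templates/index.html exists).
    return render_template("index.html")


@app.route("/upload", methods=["POST"])
def upload():
    # Pull the uploaded file from the form, push it to S3, then run label detection.
    file = request.files.get("file")          # assumption: form field is named "file"
    if file is None or not allowed_file(file.filename):
        return "Unsupported file type<br>"
    s3.upload_fileobj(file, BUCKET, file.filename)
    return detect_labels_html(file.filename, BUCKET)


if __name__ == "__main__":
    app.run(debug=True)
```

In this sketch the uploaded filename is used directly as the S3 object key to keep things short; a real deployment would likely sanitize it first (for example with werkzeug's secure_filename) before uploading.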
(Screenshots: Screen Shot 2023-05-10 at 10 15 39 AM; Screen Shot 2023-05-10 at 10 17 12 AM)
