Pneumonia Detection - Healthcare Imaging Application built with BentoML and fine-tuned Vision Transformer (ViT) model

Pneumonia Detection with BentoML


Healthcare AI 🫁🔍- Made Easy with BentoML
Powered by BentoML 🍱 + HuggingFace 🤗

📖 Introduction 📖

In this project, we showcase how to integrate an image classification model into a service using BentoML. Leveraging the pretrained nickmuchi/vit-finetuned-chest-xray-pneumonia model from HuggingFace, users can submit lung X-ray images for analysis. The model then predicts whether the image shows signs of pneumonia.

📝 Disclaimer: Please note that this project is not intended to replace professional medical advice. It is designed purely for demonstration and testing purposes. Always consult with a qualified healthcare professional for a proper diagnosis.

Sample X-ray images: Normal vs. Pneumonia

🏃‍♂️ Running the Service 🏃‍♂️

BentoML CLI

Clone the repository and install the dependencies:

git clone https://github.com/bentoml/Pneumonia-Detection-demo.git && cd Pneumonia-Detection-demo

pip install -r requirements/pypi.txt

To serve the model with BentoML:

bentoml serve

You can then open your browser at http://127.0.0.1:3000 and interact with the service through Swagger UI.

Containers

We provide two pre-built containers optimized for CPU and GPU usage, respectively.

To run the service, you'll need a container engine such as Docker, Podman, etc. Quickly test the service by running the appropriate container:

# cpu
docker run -p 3000:3000 ghcr.io/bentoml/pneumonia-detection-demo:cpu

# gpu
docker run --gpus all -p 3000:3000 ghcr.io/bentoml/pneumonia-detection-demo:gpu

🌐 Interacting with the Service 🌐

BentoML's default model serving method is through an HTTP server. In this section, we demonstrate various ways to interact with the service:

cURL

curl -X 'POST' \
  'http://localhost:3000/v1/classify' \
  -H 'accept: application/json' \
  -H 'Content-Type: image/jpeg' \
  --data-binary '@path-to-image'

Replace path-to-image with the file path of the image you want to send to the service.

The response looks like this:

{"class_name":"NORMAL"}
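A client can parse this JSON payload and map the predicted class to a human-readable message. A small sketch (the message strings are illustrative, not part of the service):

```python
import json

# Example payload as returned by the service.
raw = '{"class_name":"NORMAL"}'
result = json.loads(raw)

# Map the model's class labels to display messages (hypothetical wording).
messages = {
    "NORMAL": "No signs of pneumonia detected.",
    "PNEUMONIA": "Possible signs of pneumonia; consult a healthcare professional.",
}
print(messages[result["class_name"]])
```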

Via BentoClient 🐍

To send requests in Python, you can use bentoml.client.Client. See client.py in this repository for the example code.

Swagger UI

You can use Swagger UI to quickly explore the available endpoints of any BentoML service.

🚀 Deploying to Production 🚀

Effortlessly transition your project into a production-ready application using BentoCloud, a platform for managing and deploying machine learning models.

Start by creating a BentoCloud account. Once you've signed up, log in to your BentoCloud account using the command:

bentoml cloud login --api-token <your-api-token> --endpoint <bento-cloud-endpoint>

Note: Replace <your-api-token> and <bento-cloud-endpoint> with your specific API token and the BentoCloud endpoint respectively.

Next, build your BentoML service using the build command:

bentoml build

Then, push your freshly-built Bento service to BentoCloud using the push command:

bentoml push <name:version>

Lastly, deploy this application to BentoCloud with a single bentoml deployment create command following the deployment instructions.

BentoML offers a number of options for deploying and hosting online ML services in production. Learn more at Deploying a Bento.

👥 Community 👥

BentoML has a thriving open source community where thousands of ML/AI practitioners are contributing to the project, helping other users and discussing the future of AI. 👉 Pop into our Slack community!
