Highlight affected lung regions in chest X-rays using Python, Django, and CNN in a web-based pneumonia detection app [closed]

Our team is building a web-based pneumonia detection system using Python, Django, and a CNN. The model classifies chest X-rays as pneumonia or normal, but we now want to highlight affected lung regions for healthcare workers.

Challenges:

Generating heatmaps/saliency maps (e.g., Grad-CAM; see the sketch after this list).

Efficiently preprocessing images in Django.

Integrating predictions + highlights into the frontend.

Handling low-quality/noisy images.

Keeping the pipeline fast, maintainable, and user-friendly.
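For the first item, this is roughly the kind of Grad-CAM step we have in mind (a minimal sketch, assuming a TensorFlow/Keras model with a two-class softmax output; the layer name `"last_conv"` is a placeholder for the model's actual last convolutional layer):

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image_batch, last_conv_layer_name="last_conv", class_index=None):
    """Return an (h, w) heatmap in [0, 1] for a preprocessed batch of shape (1, H, W, C)."""
    # Model that exposes both the last conv feature maps and the final predictions
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_output, predictions = grad_model(image_batch)
        if class_index is None:
            class_index = int(tf.argmax(predictions[0]))
        class_score = predictions[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps
    grads = tape.gradient(class_score, conv_output)
    # Channel-wise importance weights: global average of the gradients
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, ReLU, then normalise to [0, 1]
    cam = tf.nn.relu(tf.reduce_sum(conv_output[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()
```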

How can we structure the end-to-end pipeline to show interpretable visualizations alongside predictions? Are there best practices, libraries, or design patterns for CNN-based pneumonia detection apps that highlight affected areas in real time?

We have trained a CNN that classifies pneumonia with good accuracy and have integrated basic image uploads in Django. We have also tried generating simple Grad-CAM heatmaps in Python, but we're unsure how to connect these visualizations efficiently to Django views while handling real-time predictions. We expect a seamless workflow where users upload an X-ray and immediately see both the classification and the highlighted regions, but our current attempts are slow or messy.
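To make that concrete, the direction we are heading on the Django side looks roughly like the sketch below (hedged: the `grad_cam()` helper from above, the `.gradcam` module path, the `"xray"` form field name, the model file path, and the `[normal, pneumonia]` label order are all placeholders, not working code from our app):

```python
import base64
import cv2
import numpy as np
import tensorflow as tf
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

from .gradcam import grad_cam  # hypothetical module holding the sketch above

# Load the model once at import time, not per request, to keep latency down
MODEL = tf.keras.models.load_model("models/pneumonia_cnn.h5")  # example path
CLASS_NAMES = ["normal", "pneumonia"]  # assumed label order

def preprocess(file_bytes, size=(224, 224)):
    """Decode, contrast-enhance (CLAHE helps low-quality scans), and batch the upload."""
    img = cv2.imdecode(np.frombuffer(file_bytes, np.uint8), cv2.IMREAD_GRAYSCALE)
    img = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
    img = cv2.resize(img, size)
    rgb = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
    batch = np.expand_dims(rgb.astype("float32") / 255.0, axis=0)  # (1, H, W, 3)
    return rgb, batch

@csrf_exempt
def predict_view(request):
    file_bytes = request.FILES["xray"].read()
    display_rgb, batch = preprocess(file_bytes)

    probs = MODEL.predict(batch)[0]
    class_index = int(np.argmax(probs))

    # Grad-CAM heatmap resized to the display image and blended as a colour overlay
    heatmap = grad_cam(MODEL, batch, class_index=class_index)
    heatmap = cv2.resize(heatmap, display_rgb.shape[:2][::-1])
    colored = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET)
    overlay = cv2.addWeighted(colored, 0.4, cv2.cvtColor(display_rgb, cv2.COLOR_RGB2BGR), 0.6, 0)

    # Return the overlay as base64 PNG so the frontend can render it without a second request
    _, png = cv2.imencode(".png", overlay)
    return JsonResponse({
        "label": CLASS_NAMES[class_index],
        "confidence": float(probs[class_index]),
        "overlay_png_base64": base64.b64encode(png.tobytes()).decode("ascii"),
    })
```

The sketch at least keeps model loading out of the request path; what we are unsure about is whether running the per-request Grad-CAM pass and overlay like this is the right structure, or whether it is the reason our current attempts feel slow.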
